<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Understanding | Haobin Tan</title><link>https://haobin-tan.netlify.app/tags/understanding/</link><atom:link href="https://haobin-tan.netlify.app/tags/understanding/index.xml" rel="self" type="application/rss+xml"/><description>Understanding</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Thu, 21 Jul 2022 00:00:00 +0000</lastBuildDate><image><url>https://haobin-tan.netlify.app/media/icon_hu7d15bc7db65c8eaf7a4f66f5447d0b42_15095_512x512_fill_lanczos_center_3.png</url><title>Understanding</title><link>https://haobin-tan.netlify.app/tags/understanding/</link></image><item><title>OSI Model</title><link>https://haobin-tan.netlify.app/docs/notes/telematics/understanding/osi_model/</link><pubDate>Thu, 11 Mar 2021 00:00:00 +0000</pubDate><guid>https://haobin-tan.netlify.app/docs/notes/telematics/understanding/osi_model/</guid><description>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-14%2015.16.23.png" alt="截屏2021-03-14 15.16.23" style="zoom:80%;" />
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Layer No.&lt;/th>
&lt;th>Layer Name&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>7&lt;/td>
&lt;td>Application&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>6&lt;/td>
&lt;td>Presentation&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>5&lt;/td>
&lt;td>Session&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>4&lt;/td>
&lt;td>Transport&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>3&lt;/td>
&lt;td>Network&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>2&lt;/td>
&lt;td>Data Link&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>1&lt;/td>
&lt;td>Physical&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
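&lt;p>The table above can be captured as a small lookup table (a Python sketch; the names are the standard OSI layer names):&lt;/p>

```python
# The seven OSI layers, keyed by layer number.
OSI_LAYERS = {
    7: "Application",
    6: "Presentation",
    5: "Session",
    4: "Transport",
    3: "Network",
    2: "Data Link",
    1: "Physical",
}

# Print top-down, as in the table above.
for number in sorted(OSI_LAYERS, reverse=True):
    print(number, OSI_LAYERS[number])
```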
&lt;h2 id="mnemonic">Mnemonic&lt;/h2>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-14%2015.18.03.png" alt="截屏2021-03-14 15.18.03" style="zoom:67%;" />
&lt;h2 id="how-does-data-flows-the-osi-model-layers">How does Data Flows the OSI Model Layers?&lt;/h2>
&lt;p>The client makes a request to the server:&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/OSI_client2server.gif" alt="OSI_client2server" style="zoom:67%;" />
&lt;p>The server responds:&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/OSI_server2client.gif" alt="OSI_server2client" style="zoom:67%;" />
&lt;p>The round trip of data through all seven layers on both sides is the &lt;strong>physical path&lt;/strong>, on which data actually, physically flows.&lt;/p>
&lt;p>The OSI model also addresses another aspect of how data flows: the &lt;strong>logical path&lt;/strong>, i.e. layer-to-layer communication.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-14%2015.39.23.png" alt="截屏2021-03-14 15.39.23" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Layers&lt;/th>
&lt;th>Sender&lt;/th>
&lt;th>Receiver&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>Application&lt;/td>
&lt;td>generate data&lt;/td>
&lt;td>read data&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Presentation&lt;/td>
&lt;td>encrypt and compress data&lt;/td>
&lt;td>decrypt and decompress data&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Session&lt;/td>
&lt;td>establish and manage the session&lt;/td>
&lt;td>manage and terminate the session&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Transport&lt;/td>
&lt;td>chop data into segments&lt;/td>
&lt;td>put segments together&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Network&lt;/td>
&lt;td>make packets&lt;/td>
&lt;td>open packets&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Data Link&lt;/td>
&lt;td>make frames&lt;/td>
&lt;td>open frames&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Physical&lt;/td>
&lt;td>convert frames into bits and transmit&lt;/td>
&lt;td>receive bits and rebuild frames&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
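&lt;p>The sender/receiver roles in the table above amount to encapsulation and decapsulation. A toy Python sketch (all header fields — ports, IPs, MACs — are made-up placeholders, not real protocol formats):&lt;/p>

```python
# Toy sketch of encapsulation on the sender side: each layer wraps the
# payload from the layer above with its own header. All header fields
# (ports, IPs, MACs) are illustrative placeholders, not real formats.

def encapsulate(app_data: str) -> dict:
    segment = {"src_port": 52000, "dst_port": 80, "payload": app_data}      # Transport
    packet = {"src_ip": "10.0.0.1", "dst_ip": "93.184.216.34",
              "payload": segment}                                           # Network
    frame = {"src_mac": "aa:bb:cc:00:00:01", "dst_mac": "aa:bb:cc:00:00:02",
             "payload": packet}                                             # Data Link
    return frame

def decapsulate(frame: dict) -> str:
    # The receiver unwraps in reverse: frame -> packet -> segment -> data.
    return frame["payload"]["payload"]["payload"]

frame = encapsulate("GET / HTTP/1.1")
print(decapsulate(frame))  # the original application data comes back out
```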
&lt;h2 id="osi-model-layer-by-layer">OSI Model Layer by Layer&lt;/h2>
&lt;h3 id="application-layer">Application Layer&lt;/h3>
&lt;ul>
&lt;li>
&lt;p>Non-technical: &lt;strong>user&amp;rsquo;s application&lt;/strong> (e.g. Chrome, Firefox)&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Technical: refers to application protocols&lt;/p>
&lt;ul>
&lt;li>E.g. HTTP, SMTP, POP3, IMAP4, &amp;hellip;&lt;/li>
&lt;li>Facilitate communication between applications and the operating system&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>
&lt;p>Application data is generated here&lt;/p>
&lt;/li>
&lt;/ul>
&lt;h3 id="presentation-layer">Presentation Layer&lt;/h3>
&lt;ul>
&lt;li>
&lt;p>Provides a variety of coding and conversion functions on application data&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Ensure that information sent from the application layer of the client can be understood by the application layer of the server&lt;/p>
&lt;p>$\rightarrow$ Translate application data into a common format that every system can understand&lt;/p>
&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>
&lt;p>Main functions&lt;/p>
&lt;ul>
&lt;li>Data conversion&lt;/li>
&lt;li>Data encryption&lt;/li>
&lt;li>Data compression&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>
&lt;p>Protocols&lt;/p>
&lt;ul>
&lt;li>Images: JPEG, GIF, TIF, PNG, &amp;hellip;&lt;/li>
&lt;li>Videos: MP4, AVI, &amp;hellip;&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
&lt;h3 id="session-layer">Session Layer&lt;/h3>
&lt;ul>
&lt;li>
&lt;p>Establish, manage, and terminate connections between the sender and the receiver&lt;/p>
&lt;/li>
&lt;li>
&lt;p>An intuitive example&lt;/p>
&lt;figure>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/OSI_Session_Example.gif"
alt="Telephone call is a good example to explain session layer: First establish the connection and start the conversation. Then terminate the session">&lt;figcaption>
&lt;p>Telephone call is a good example to explain session layer: First establish the connection and start the conversation. Then terminate the session&lt;/p>
&lt;/figcaption>
&lt;/figure>
&lt;/li>
&lt;/ul>
&lt;h3 id="transport-layer">Transport Layer&lt;/h3>
&lt;ul>
&lt;li>
&lt;p>Accept data from Session layer&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Chop data into segments&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Add header information&lt;/p>
&lt;ul>
&lt;li>E.g. destination port number, source port number, sequence number, &amp;hellip;&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>
&lt;p>Protocols: &lt;strong>TCP and UDP&lt;/strong>&lt;/p>
&lt;/li>
&lt;/ul>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/OSI_transport.gif" alt="OSI_transport" style="zoom:67%;" />
&lt;h3 id="network-layer">Network Layer&lt;/h3>
&lt;ul>
&lt;li>Protocol: &lt;strong>Internet Protocol (IP)&lt;/strong>&lt;/li>
&lt;li>Take segments from the Transport layer and add extra header information
&lt;ul>
&lt;li>E.g. sender&amp;rsquo;s and receiver&amp;rsquo;s IP address&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Create packets&lt;/li>
&lt;/ul>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/OSI_Network.gif" alt="OSI_Network" style="zoom:67%;" />
&lt;h3 id="data-link-layer">Data Link Layer&lt;/h3>
&lt;ul>
&lt;li>
&lt;p>When IP packet arrives at this layer, more header information will be added to the packet&lt;/p>
&lt;ul>
&lt;li>E.g. source and destination MAC address, FCS trailer&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>
&lt;p>Ethernet frames are created&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/OSI_DataLink.gif" alt="OSI_DataLink" style="zoom:67%;" />
&lt;/li>
&lt;/ul>
&lt;p>A MAC address is the physical address of your Network Interface Card (NIC). At this layer, the NIC has the crucial job of creating frames on the sender side, and reading or discarding frames on the receiver side.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/MAC_address.gif" alt="MAC_address" style="zoom:67%;" />
&lt;h3 id="physical-layer">Physical Layer&lt;/h3>
&lt;ul>
&lt;li>
&lt;p>Accept frames from the Data Link layer and generate bits&lt;/p>
&lt;/li>
&lt;li>
&lt;p>These bits are transmitted as electrical impulses or light signals&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Through the network media, the data travels to the receiver&lt;/p>
&lt;p>$\rightarrow$ This completes the whole journey through all seven layers on the sender side&lt;/p>
&lt;/li>
&lt;/ul>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/OSI_physical.gif" alt="OSI_physical" style="zoom:67%;" />
&lt;h2 id="reference">Reference&lt;/h2>
&lt;ul>
&lt;li>
&lt;p>OSI Model 👍&lt;/p>
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/nFnLPGk8WjA?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"
>&lt;/iframe>
&lt;/div>
&lt;/li>
&lt;/ul></description></item><item><title>Circuit Switching Vs. Packet Switching</title><link>https://haobin-tan.netlify.app/docs/notes/telematics/understanding/circuit_vs_packet_switching/</link><pubDate>Thu, 11 Mar 2021 00:00:00 +0000</pubDate><guid>https://haobin-tan.netlify.app/docs/notes/telematics/understanding/circuit_vs_packet_switching/</guid><description>&lt;h2 id="tldr">TL;DR&lt;/h2>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Switching Network&lt;/th>
&lt;th>Characteristics&lt;/th>
&lt;th>Suitable for&lt;/th>
&lt;th>Use Cases&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>Circuit Switching&lt;/td>
&lt;td>A dedicated channel or circuit is established for the duration of communications&lt;/td>
&lt;td>Communications which require data to be transmitted in real time&lt;/td>
&lt;td>Traditional telephone calls&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Packet Switching&lt;/td>
&lt;td>Connected through many routers, each serving a different network segment&lt;/td>
&lt;td>More flexible and more efficient if some amount of delay is acceptable&lt;/td>
&lt;td>Handles digital data&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;h2 id="circuit-switching">Circuit Switching&lt;/h2>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>A dedicated channel or circuit is established for the duration of communications.&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>The method used by the old traditional telephone call, carried over the &lt;strong>Public Switched Telephone Network (PSTN)&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Also referred to as the &lt;strong>Plain Old Telephone Service (POTS)&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Ideal for communications which require data to be transmitted in real time&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Normally used for traditional telephone calls&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>This is what a typical traditional telephone network looks like:&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-11%2017.10.10.png" alt="截屏2021-03-11 17.10.10" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>The PSTN networks are connected through central offices, which act as telephone exchanges, each serving a certain geographical area.&lt;/p>
&lt;p>When Person A calls Person B:&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/Circuit_switching.gif" alt="Circuit_switching" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;h2 id="packet-switching">Packet Switching&lt;/h2>
&lt;ul>
&lt;li>
&lt;p>Packet switching networks are connected through many routers, each serving a different segment of the network&lt;/p>
&lt;/li>
&lt;li>
&lt;p>How does it work?&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/Packet_Switching_1.gif" alt="Packet_Switching_1" style="zoom:67%;" />
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/Packet_Switching_2.gif" alt="Packet_Switching_2" style="zoom:67%;" />
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/Packet_Switching_3.gif" alt="Packet_Switching_3" style="zoom:67%;" />
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/Packet_Switching_4.gif" alt="Packet_Switching_4" style="zoom:67%;" />
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/Packet_Switching_5.gif" alt="Packet_Switching_5" style="zoom:67%;" />
&lt;/li>
&lt;li>
&lt;p>More flexible and more efficient if some amount of delay is acceptable&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Normally handles digital data&lt;/p>
&lt;/li>
&lt;/ul>
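&lt;p>The key idea above — packets routed independently, possibly out of order, reassembled at the receiver — can be modeled in a few lines of Python (router names and paths are made up):&lt;/p>

```python
import random

# Toy model of packet switching: each packet is routed independently, may
# take a different path, and may arrive out of order; the receiver puts
# them back together by sequence number. Router names are made up.

PATHS = [
    ["R1", "R3", "R5"],
    ["R1", "R2", "R4", "R5"],
    ["R1", "R2", "R5"],
]

def send(message: str, chunk: int = 5) -> list:
    packets = [
        {"seq": i // chunk, "path": random.choice(PATHS),
         "payload": message[i:i + chunk]}
        for i in range(0, len(message), chunk)
    ]
    random.shuffle(packets)  # packets may arrive in any order
    return packets

def receive(packets: list) -> str:
    return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

msg = "Hello from Mumbai to Kansas City"
print(receive(send(msg)) == msg)  # True: reassembled despite reordering
```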
&lt;h2 id="reference">Reference&lt;/h2>
&lt;p>Circuit Switching vs. Packet Switching&lt;/p>
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/B1tElYnFqL8?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"
>&lt;/iframe>
&lt;/div></description></item><item><title>Multiprotocol Label Switching (MPLS)</title><link>https://haobin-tan.netlify.app/docs/notes/telematics/understanding/mpls/</link><pubDate>Thu, 11 Mar 2021 00:00:00 +0000</pubDate><guid>https://haobin-tan.netlify.app/docs/notes/telematics/understanding/mpls/</guid><description>&lt;h2 id="recall-packet-switching-and-circuit-switching">Recall: Packet Switching and Circuit Switching&lt;/h2>
&lt;p>Suppose an IP packet is sent from Mumbai, India to Kansas City, Kansas using &lt;strong>Packet switching&lt;/strong>&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/Packet_Switching_Example_1.gif" alt="Packet_Switching_Example_1" style="zoom: 50%;" />
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/Packet_Switching_Example_2.gif" alt="Packet_Switching_Example_2" style="zoom:50%;" />
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/Packet_Switching_Example_3.gif" alt="Packet_Switching_Example_3" style="zoom:50%;" />
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/Packet_Switching_Example_4.gif" alt="Packet_Switching_Example_4" style="zoom:50%;" />
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/Packet_Switching_Example_5.gif" alt="Packet_Switching_Example_5" style="zoom:50%;" />
&lt;p>Packet switching is flexible, and the data path is not fixed. But processing IP information at every router &lt;strong>slows down&lt;/strong> transmission.&lt;/p>
&lt;p>In contrast, &lt;strong>circuit switching&lt;/strong> is a fixed-path switching method. It is reliable but more expensive.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-11%2018.16.54.png" alt="截屏2021-03-11 18.16.54" style="zoom:67%;" />
&lt;h2 id="mpls">MPLS&lt;/h2>
&lt;p>MPLS allows IP packets to be forwarded at layer 2 (switching level) without being passed up to layer 3 (routing level).&lt;/p>
&lt;p>Let&amp;rsquo;s take a look at how MPLS works with the same IP packet sent from Mumbai, India to Kansas City, Kansas.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/MPLS_Example_1.gif" alt="MPLS_Example_1" style="zoom:50%;" />
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/MPLS_Example_2.gif" alt="MPLS_Example_2" style="zoom:50%;" />
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/MPLS_Example_3.gif" alt="MPLS_Example_3" style="zoom:50%;" />
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/MPLS_Example_4.gif" alt="MPLS_Example_4" style="zoom:50%;" />
&lt;p>These routers act like switches on a local network.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-11%2018.30.34.png" alt="截屏2021-03-11 18.30.34" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>As a result, MPLS offers potentially faster transmission than traditional packet-switched networks 👏.&lt;/p>
&lt;p>&lt;strong>In summary, MPLS can create end-to-end paths that act like circuit-switched connections, but deliver layer 3 IP packet.&lt;/strong>&lt;/p>
&lt;p>As we know, routing is a layer 3 function, while switching is a layer 2 function. MPLS makes routers on the Internet act like switches on a local network.&lt;/p>
&lt;p>$\rightarrow$ MPLS is therefore also called a &lt;strong>layer 2.5 protocol&lt;/strong>.&lt;/p>
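&lt;p>Why label switching is cheap can be sketched with per-router label tables. This is a toy model: the labels, router names, and interfaces are invented, and real routers keep these tables (the LFIB) in hardware:&lt;/p>

```python
# Sketch of label switching: instead of a longest-prefix IP route lookup at
# every hop, each MPLS router does a single exact-match lookup on a short
# label, swaps it, and forwards. Labels, routers, and interfaces are made up.

# Per-router label tables: incoming label -> (outgoing label, out interface)
LFIB = {
    "R1": {17: (22, "eth1")},
    "R2": {22: (31, "eth2")},
    "R3": {31: (None, "eth0")},  # None: pop the label, hand over the IP packet
}

def forward(router: str, label: int):
    return LFIB[router][label]

label = 17
for router in ["R1", "R2", "R3"]:
    label, iface = forward(router, label)
    print(router, "-> out", iface, "with label", label)
```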
&lt;h2 id="reference">Reference&lt;/h2>
&lt;p>MPLS - Multiprotocol Label Switching (2.5 layer protocol)&lt;/p>
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/BuIWNecUAE8?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"
>&lt;/iframe>
&lt;/div></description></item><item><title>Control Plane Vs. Data Plane</title><link>https://haobin-tan.netlify.app/docs/notes/telematics/understanding/control_vs_data_plane/</link><pubDate>Fri, 12 Mar 2021 00:00:00 +0000</pubDate><guid>https://haobin-tan.netlify.app/docs/notes/telematics/understanding/control_vs_data_plane/</guid><description>&lt;p>Abstract view on an IP router&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-12%2010.22.31.png" alt="截屏2021-03-12 10.22.31" loading="lazy" />&lt;/p>
&lt;h2 id="control-plane">Control Plane&lt;/h2>
&lt;p>&lt;strong>Determines/controls how data packets are forwarded&lt;/strong> — meaning how data is sent from one place to another.&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Responsible for&lt;/p>
&lt;ul>
&lt;li>Creating the routing table&lt;/li>
&lt;li>Populating the routing table&lt;/li>
&lt;li>Building the forwarding table from the network topology, thereby enabling the data plane functions&lt;/li>
&lt;/ul>
&lt;p>$\rightarrow$ Here the router makes its decision&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Routers use various &lt;a href="https://www.cloudflare.com/learning/network-layer/what-is-a-protocol/">protocols&lt;/a> to identify network paths, and they store these paths in routing tables.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;h2 id="data-plane--forwarding-plane">Data Plane / Forwarding Plane&lt;/h2>
&lt;ul>
&lt;li>
&lt;p>In contrast to the control plane, which determines how packets should be forwarded, &lt;strong>the data plane actually forwards the packets.&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Packets travel through the router on the data plane; incoming and outgoing frames are handled based on the control plane&amp;rsquo;s logic.&lt;/p>
&lt;/li>
&lt;/ul>
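&lt;p>The division of labor can be sketched in one toy router: the control plane fills the routing table (hard-coded here; real routers learn routes via routing protocols), and the data plane only looks packets up in it. The prefixes and interface names are illustrative:&lt;/p>

```python
import ipaddress

# Sketch of the two planes in one router. The control plane builds the
# routing table; the data plane only looks packets up in it and forwards.

# Control plane: the routing table (here hard-coded for the demo).
ROUTING_TABLE = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth0"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth1"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth2"),  # default route
]

# Data plane: longest-prefix match, then forward out the chosen interface.
def forward(dst_ip: str) -> str:
    addr = ipaddress.ip_address(dst_ip)
    matches = [(net, iface) for net, iface in ROUTING_TABLE if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(forward("10.1.2.3"))  # eth1: the more specific /16 beats the /8
print(forward("8.8.8.8"))   # eth2: only the default route matches
```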
&lt;h2 id="summary">Summary&lt;/h2>
&lt;p>Think of the control plane as being like the stoplights that operate at the intersections of a city. Meanwhile, the data plane (or the forwarding plane) is more like the cars that drive on the roads, stop at the intersections, and obey the stoplights.&lt;/p>
&lt;h2 id="reference">Reference&lt;/h2>
&lt;ul>
&lt;li>
&lt;p>&lt;a href="https://www.cloudflare.com/learning/network-layer/what-is-the-control-plane/">What is the control plane? | Control plane vs. data plane&lt;/a>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;a href="https://www.geeksforgeeks.org/difference-between-control-plane-and-data-plane/">Difference between Control Plane and Data Plane&lt;/a>&lt;/p>
&lt;/li>
&lt;/ul></description></item><item><title>TCP</title><link>https://haobin-tan.netlify.app/docs/notes/telematics/understanding/tcp/</link><pubDate>Mon, 15 Mar 2021 00:00:00 +0000</pubDate><guid>https://haobin-tan.netlify.app/docs/notes/telematics/understanding/tcp/</guid><description>&lt;p>&lt;strong>TCP&lt;/strong> = &lt;strong>T&lt;/strong>ransmission &lt;strong>C&lt;/strong>ontrol &lt;strong>P&lt;/strong>rotocol&lt;/p>
&lt;h2 id="how-tcp-starts-and-closes-session">How TCP starts and closes session?&lt;/h2>
&lt;p>The three stages of TCP:&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2021-03-15%2011.43.05.png" alt="截屏2021-03-15 11.43.05">&lt;/p>
&lt;ol>
&lt;li>&lt;a href="#session-starting">Session starting&lt;/a>&lt;/li>
&lt;li>&lt;a href="#data-transmission">Data transmission&lt;/a>&lt;/li>
&lt;li>&lt;a href="#session-ending">Session ending&lt;/a>&lt;/li>
&lt;/ol>
&lt;p>These three stages make TCP a connection-oriented and reliable protocol. 👏&lt;/p>
&lt;p>Suppose a client wants to get web pages from a server. It will go through the three stages below.&lt;/p>
&lt;h3 id="session-starting">Session Starting&lt;/h3>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2021-03-15%2011.31.50.png" alt="截屏2021-03-15 11.31.50">&lt;/p>
&lt;p>&lt;strong>Three-way handshake&lt;/strong> to start a session&lt;/p>
&lt;ol>
&lt;li>Client sends a single &lt;code>SYN&lt;/code> packet to the server, asking for a session&lt;/li>
&lt;/ol>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-15%2011.13.10.png" alt="截屏2021-03-15 11.13.10" style="zoom:67%;" />
&lt;blockquote>
&lt;p>Client: Hi, server, do you want to talk?&lt;/p>
&lt;/blockquote>
&lt;ol start="2">
&lt;li>
&lt;p>Server replies with a &lt;code>SYNACK&lt;/code> packet (the server acknowledges the client&amp;rsquo;s request and asks the client for a talk)&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-15%2011.27.59.png" alt="截屏2021-03-15 11.27.59" style="zoom:67%;" />
&lt;blockquote>
&lt;p>Client: Hi, server, do you want to talk?&lt;/p>
&lt;p>Server: Yes, I want to talk. Do you want to talk?&lt;/p>
&lt;/blockquote>
&lt;/li>
&lt;li>
&lt;p>The client replies with &lt;code>ACK&lt;/code> packet.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-15%2011.30.19.png" alt="截屏2021-03-15 11.30.19" style="zoom:67%;" />
&lt;blockquote>
&lt;p>Client: Hi, server, do you want to talk?&lt;/p>
&lt;p>Server: Yes, I want to talk. Do you want to talk?&lt;/p>
&lt;p>Client: Yes, I want to talk.&lt;/p>
&lt;/blockquote>
&lt;/li>
&lt;/ol>
&lt;h3 id="data-transmission">Data Transmission&lt;/h3>
&lt;p>After the three-way handshake, the connection is established and data packets can be transferred.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-15%2011.35.37.png" alt="截屏2021-03-15 11.35.37" style="zoom:67%;" />
&lt;p>During data transmission, TCP also guarantees that data is successfully received and reassembled in the correct order.&lt;/p>
&lt;h3 id="session-ending">Session Ending&lt;/h3>
&lt;p>After the server sends all packets to the client, a four-step procedure is performed before the connection is closed:&lt;/p>
&lt;ol>
&lt;li>
&lt;p>The server sends a &lt;code>FINACK&lt;/code> packet to the client.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-15%2011.38.15.png" alt="截屏2021-03-15 11.38.15" style="zoom:67%;" />
&lt;blockquote>
&lt;p>Server: I am done. Can you hear me?&lt;/p>
&lt;/blockquote>
&lt;/li>
&lt;li>
&lt;p>The client responds with an &lt;code>ACK&lt;/code> packet.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-15%2011.39.06.png" alt="截屏2021-03-15 11.39.06" style="zoom:67%;" />
&lt;blockquote>
&lt;p>Server: I am done. Can you hear me?&lt;/p>
&lt;p>Client: Yes, I got your message, I can hear you.&lt;/p>
&lt;/blockquote>
&lt;/li>
&lt;li>
&lt;p>When the client finishes downloading the webpage, it sends &lt;code>FINACK&lt;/code> to the server.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-15%2012.11.00.png" alt="截屏2021-03-15 12.11.00" style="zoom:67%;" />
&lt;blockquote>
&lt;p>Server: I am done. Can you hear me?&lt;/p>
&lt;p>Client: Yes, I got your message, I can hear you.&lt;/p>
&lt;p>Client: I am done. Can you hear me?&lt;/p>
&lt;/blockquote>
&lt;/li>
&lt;li>
&lt;p>The server responds with &lt;code>ACK&lt;/code>.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-15%2011.41.18.png" alt="截屏2021-03-15 11.41.18" style="zoom:67%;" />
&lt;/li>
&lt;/ol>
&lt;p>After this, the session between them is properly closed, unless the client continues to ask for another webpage.&lt;/p>
&lt;h2 id="tcp-three-way-handshake-in-detail">TCP Three-way Handshake in Detail&lt;/h2>
&lt;p>Suppose the client wants to get web pages from the server. Before any web page transmission, TCP connection must be established through &lt;strong>three-way handshake&lt;/strong>.&lt;/p>
&lt;ol>
&lt;li>
&lt;p>The client sends &lt;code>SYN&lt;/code> segment to the server, asking for synchronization (synchronization means connection)&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-15%2012.42.14.png" alt="截屏2021-03-15 12.42.14" style="zoom:67%;" />
&lt;/li>
&lt;li>
&lt;p>The server replies with &lt;code>SYN-ACK&lt;/code> (synchronization and acknowledgement)&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-15%2012.43.43.png" alt="截屏2021-03-15 12.43.43" style="zoom:67%;" />
&lt;ul>
&lt;li>The server acknowledges the client&amp;rsquo;s connection request&lt;/li>
&lt;li>The server also asks the client to open a connection too.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>
&lt;p>The client replies &lt;code>ACK&lt;/code>, which is like &amp;ldquo;Yes&amp;rdquo;. Then the two-way connection is established between them.&lt;/p>
&lt;/li>
&lt;/ol>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/3_time_handshake.gif" alt="3_time_handshake" style="zoom:67%;" />
&lt;h3 id="more-technical-view">More Technical View&lt;/h3>
&lt;ol>
&lt;li>
&lt;p>The client sends a &lt;code>SYN&lt;/code> segment with the initial sequence number &lt;code>9001&lt;/code>&lt;/p>
&lt;ul>
&lt;li>&lt;code>ACK&lt;/code> is set to 0&lt;/li>
&lt;li>&lt;code>SYN&lt;/code> is set to 1&lt;/li>
&lt;/ul>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2021-03-15%2012.50.33.png" alt="截屏2021-03-15 12.50.33">&lt;/p>
&lt;/li>
&lt;li>
&lt;p>The server replies with &lt;code>SYN-ACK&lt;/code>&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2021-03-15%2012.52.59.png" alt="截屏2021-03-15 12.52.59">&lt;/p>
&lt;ul>
&lt;li>The server&amp;rsquo;s &lt;code>SYN&lt;/code> is set to 1&lt;/li>
&lt;li>&lt;code>ACK&lt;/code> is set to 9002, which is the client&amp;rsquo;s sequence number plus 1.
&lt;ul>
&lt;li>By adding 1 to the client&amp;rsquo;s sequence number, the server simply acknowledges the client&amp;rsquo;s connection request&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>The server&amp;rsquo;s segment has its own initial sequence number 5001&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>
&lt;p>The client responds&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2021-03-15%2012.56.22.png" alt="截屏2021-03-15 12.56.22">&lt;/p>
&lt;ul>
&lt;li>&lt;code>SYN&lt;/code> is set to 0: there is no more synchronization/connection request&lt;/li>
&lt;li>&lt;code>ACK&lt;/code> is set to 5002: The client acknowledges the server connection request by increasing the server-side sequence number by 1 (5001 + 1 = 5002)&lt;/li>
&lt;li>&lt;code>Seq&lt;/code> is set to 9002: This is the second segment issued by the client at this point (9001 + 1 = 9002)&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ol>
&lt;p>At this point, both client and server have agreed to open their connection to each other.&lt;/p>
&lt;ul>
&lt;li>Steps 1 and 2 establish the connection from client to server&lt;/li>
&lt;li>Steps 2 and 3 establish the connection from server to client&lt;/li>
&lt;/ul>
&lt;p>$\rightarrow$ Two-way connection channel is established and they are ready to exchange their messages.&lt;/p>
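&lt;p>The sequence/acknowledgement arithmetic above can be written out as a sketch. The initial sequence numbers 9001 and 5001 come from this example; real TCP stacks choose them (pseudo-)randomly:&lt;/p>

```python
# Toy model of the three-way handshake's sequence-number arithmetic:
# each side acknowledges the other's SYN by echoing its sequence number + 1.

def three_way_handshake(client_isn: int, server_isn: int) -> list:
    syn = {"from": "client", "SYN": 1, "ACK": 0, "seq": client_isn}
    syn_ack = {"from": "server", "SYN": 1, "ACK": 1,
               "seq": server_isn, "ack": syn["seq"] + 1}
    ack = {"from": "client", "SYN": 0, "ACK": 1,
           "seq": syn["seq"] + 1, "ack": syn_ack["seq"] + 1}
    return [syn, syn_ack, ack]

for seg in three_way_handshake(9001, 5001):
    print(seg)  # SYN (seq 9001), SYN-ACK (seq 5001, ack 9002), ACK (seq 9002, ack 5002)
```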
&lt;h2 id="tcp-vs-udp">TCP vs. UDP&lt;/h2>
&lt;p>&lt;strong>TCP&lt;/strong> (&lt;strong>T&lt;/strong>ransmission &lt;strong>C&lt;/strong>ontrol &lt;strong>P&lt;/strong>rotocol) and &lt;strong>UDP&lt;/strong> (&lt;strong>U&lt;/strong>ser &lt;strong>D&lt;/strong>atagram &lt;strong>P&lt;/strong>rotocol) are two protocols of the Transport layer (Layer 4) in the OSI model.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/TCP_UDP.gif" alt="TCP_UDP" style="zoom:67%;" />
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>&lt;/th>
&lt;th>TCP&lt;/th>
&lt;th>UDP&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>&lt;a href="#reliablity">Reliable&lt;/a>&lt;/td>
&lt;td>Yes&lt;/td>
&lt;td>No&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>&lt;a href="#connection">Connection&lt;/a>&lt;/td>
&lt;td>connection-oriented&lt;/td>
&lt;td>connectionless&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Character&lt;/td>
&lt;td>reliable&lt;/td>
&lt;td>faster, more efficient&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Usage&lt;/td>
&lt;td>dominant transport protocol&lt;/td>
&lt;td>&lt;ul>&lt;li>live streaming audio/video&lt;/li>&lt;li>DNS queries, DHCP, VoIP&lt;/li>&lt;li>only a few applications use UDP&lt;/li>&lt;/ul>&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;h3 id="reliablity">Reliablity&lt;/h3>
&lt;ul>
&lt;li>
&lt;p>When &lt;strong>TCP&lt;/strong> delivers data segments to the destination, the protocol makes sure that&lt;/p>
&lt;ul>
&lt;li>each segment is received,&lt;/li>
&lt;li>there are no errors,&lt;/li>
&lt;li>and segments are put together in the correct order&lt;/li>
&lt;/ul>
&lt;p>$\rightarrow$ TCP is reliable 👍&lt;/p>
&lt;/li>
&lt;li>
&lt;p>When &lt;strong>UDP&lt;/strong> delivers data segments to the destination, it does NOT guarantee delivery, and does not even care whether the data reaches the destination. Once the data is sent off, UDP says &amp;ldquo;Goodbye, and good luck!&amp;rdquo;&lt;/p>
&lt;p>$\rightarrow$ UDP is NOT reliable 🤪&lt;/p>
&lt;/li>
&lt;/ul>
&lt;h3 id="connection">Connection&lt;/h3>
&lt;p>TCP: connection-oriented&lt;/p>
&lt;ul>
&lt;li>TCP uses three-way handshake to make sure the connection is established before data transmission.&lt;/li>
&lt;/ul>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-15%2012.02.03.png" alt="截屏2021-03-15 12.02.03" style="zoom:67%;" />
&lt;ul>
&lt;li>
&lt;p>After data is delivered, TCP follows a four-step procedure to make sure every bit of data is delivered and received before closing the connection.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-15%2012.02.30.png" alt="截屏2021-03-15 12.02.30" style="zoom:67%;" />
&lt;/li>
&lt;/ul>
&lt;p>UDP: connectionless&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-15%2012.05.16.png" alt="截屏2021-03-15 12.05.16" style="zoom:67%;" />&lt;/p>
&lt;ul>
&lt;li>No handshake to establish the connection&lt;/li>
&lt;li>No procedure to close the connection&lt;/li>
&lt;/ul>
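&lt;p>UDP&amp;rsquo;s connectionless nature is easy to see with two datagram sockets on the loopback interface — the sender just fires the datagram at an address, with no handshake and no teardown. A minimal sketch using Python&amp;rsquo;s standard &lt;code>socket&lt;/code> module:&lt;/p>

```python
import socket

# Two UDP sockets on loopback: no handshake, no connection state,
# no teardown procedure -- sendto() is fire-and-forget.

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # let the OS pick a free port
receiver.settimeout(2.0)

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"I want a beer.", receiver.getsockname())  # fire and forget

data, _ = receiver.recvfrom(1024)
print(data)

sender.close()
receiver.close()
```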
&lt;h3 id="example-for-understanding-tcp-and-udp">Example for Understanding TCP and UDP&lt;/h3>
&lt;p>TCP walks into a bar:&lt;/p>
&lt;blockquote>
&lt;p>TCP: I want a beer.&lt;/p>
&lt;p>Bartender: You want a beer?&lt;/p>
&lt;p>TCP: Yes, I want a beer.&lt;/p>
&lt;/blockquote>
&lt;p>UDP walks to a bar&lt;/p>
&lt;blockquote>
&lt;p>UDP: I want a beer. (He does NOT care if the bartender hears him or not)&lt;/p>
&lt;/blockquote>
&lt;p>UDP might never get a beer&amp;hellip;Well, he&amp;rsquo;s UDP. He doesn&amp;rsquo;t care.&lt;/p>
&lt;h2 id="tcp-details">TCP Details&lt;/h2>
&lt;h3 id="round-trip-time-rtt">Round Trip Time (RTT)&lt;/h3>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/round_trip_time.png" alt="round_trip_time" style="zoom:80%;" />
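&lt;p>RTT is simply the time from sending a request until its reply arrives. A minimal sketch, using a local &lt;code>socketpair&lt;/code> as a stand-in for a real network link (so the measured value is only illustrative):&lt;/p>

```python
import socket
import time

# A connected pair of local sockets plays the roles of client and server.
a, b = socket.socketpair()

start = time.perf_counter()
a.sendall(b"ping")            # request leaves the "client"
echoed = b.recv(4)            # "server" receives it...
b.sendall(echoed)             # ...and echoes it back
reply = a.recv(4)             # reply arrives at the "client"
rtt = time.perf_counter() - start

print(f"RTT: {rtt * 1e6:.0f} microseconds")
a.close(); b.close()
```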
&lt;h3 id="stop-and-wait-arq-protocol">Stop-and-Wait ARQ Protocol&lt;/h3>
&lt;ul>
&lt;li>After transmitting one frame, the sender waits for an acknowledgement before transmitting the next frame&lt;/li>
&lt;li>If the acknowledgement does NOT arrive after a certain period of time, the sender times out and retransmits the original frame&lt;/li>
&lt;/ul>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-04-01%2015.47.23.png" alt="截屏2021-04-01 15.47.23" style="zoom:67%;" />
&lt;ul>
&lt;li>Drawbacks
&lt;ul>
&lt;li>One frame at a time&lt;/li>
&lt;li>Poor utilization of bandwidth&lt;/li>
&lt;li>Poor performance&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
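&lt;p>The behaviour above can be sketched as a toy simulation. The loss rate and retry limit are made up for illustration, and a random draw stands in for the real timeout mechanism:&lt;/p>

```python
import random

random.seed(4)

def send_stop_and_wait(frames, loss_rate=0.3, max_retries=10):
    """Transmit frames one at a time; retransmit on a (simulated) missing ACK."""
    log = []
    for frame in frames:
        for attempt in range(max_retries):
            log.append(("send", frame))
            if random.random() >= loss_rate:   # ACK arrived in time
                log.append(("ack", frame))
                break
            log.append(("timeout", frame))     # no ACK -> sender times out
        else:
            raise RuntimeError(f"frame {frame} undeliverable")
    return log

log = send_stop_and_wait([0, 1, 2])
for event, frame in log:
    print(f"{event:>7}: frame {frame}")
```

Note that the sender is idle between each send and its ACK, which is exactly the bandwidth-utilization drawback listed above.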
&lt;h3 id="sliding-window-protocol">Sliding Window Protocol&lt;/h3>
&lt;ul>
&lt;li>Send multiple frames at a time&lt;/li>
&lt;li>Number of frames to be sent is based on &lt;strong>Window size&lt;/strong>
&lt;ul>
&lt;li>Window size = # frames that can be sent before expecting ACK&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Each frame is numbered (with a &lt;strong>sequence number&lt;/strong>)&lt;/li>
&lt;/ul>
&lt;p>Example&lt;/p>
&lt;ul>
&lt;li>
&lt;p>The sender wants to send 11 frames (Frame 0 to 10)&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Window size is set to 4&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>The sender sends 4 frames at the same time&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2021-04-01%2015.50.54.png" alt="截屏2021-04-01 15.50.54">&lt;/p>
&lt;p>Now the receiver sends back an ACK for frame 0. Once frame 0 is acknowledged, the sender can send frame 4. Looking at the sliding window, it has slid forward by one frame.&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2021-04-01%2015.52.17.png" alt="截屏2021-04-01 15.52.17">&lt;/p>
&lt;p>Now frame 1 is acknowledged.&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2021-04-01%2015.55.17.png" alt="截屏2021-04-01 15.55.17">&lt;/p>
&lt;p>The process continues in the same way. Frames 0 and 1 are acknowledged. Frames 2&amp;ndash;5 are in the sliding window, meaning that they have already been sent but are not yet acknowledged.&lt;/p>
&lt;p>Summary:&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2021-04-01%2015.58.26-20210401155858796.png" alt="截屏2021-04-01 15.58.26">&lt;/p>
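&lt;p>The walkthrough above can be sketched as a toy sender, assuming no loss and in-order ACKs:&lt;/p>

```python
from collections import deque

def sliding_window_send(num_frames, window_size):
    """Simulate a sender with a fixed window; ACKs arrive in order (no loss)."""
    in_flight = deque()           # frames sent but not yet acknowledged
    next_frame = 0
    events = []
    while next_frame < num_frames or in_flight:
        # Fill the window: keep sending while there is room.
        while next_frame < num_frames and len(in_flight) < window_size:
            in_flight.append(next_frame)
            events.append(f"send {next_frame}")
            next_frame += 1
        # Oldest outstanding frame is acknowledged; the window slides by one.
        acked = in_flight.popleft()
        events.append(f"ack  {acked}")
    return events

events = sliding_window_send(num_frames=11, window_size=4)
print("\n".join(events[:8]))
```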
&lt;blockquote>
&lt;p>Reference: Sliding Window Protocol&lt;/p>
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/LnbvhoxHn8M?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"
>&lt;/iframe>
&lt;/div>
&lt;/blockquote>
&lt;h2 id="reference">Reference&lt;/h2>
&lt;h4 id="how-tcp-starts-and-close-session">How TCP starts and close session?&lt;/h4>
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/zlIHLnOigmA?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"
>&lt;/iframe>
&lt;/div>
&lt;h4 id="three-time-handshake">Three-time handshake&lt;/h4>
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/xMtP5ZB3wSk?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"
>&lt;/iframe>
&lt;/div>
&lt;h4 id="tcp-vs-udp-1">TCP vs. UDP&lt;/h4>
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/SLY4Ud53UGs?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"
>&lt;/iframe>
&lt;/div></description></item><item><title>Ethernet Basics</title><link>https://haobin-tan.netlify.app/docs/notes/telematics/understanding/ethernet/</link><pubDate>Mon, 15 Mar 2021 00:00:00 +0000</pubDate><guid>https://haobin-tan.netlify.app/docs/notes/telematics/understanding/ethernet/</guid><description>&lt;h2 id="csmacd">CSMA/CD&lt;/h2>
&lt;p>&lt;strong>CSMA/CD&lt;/strong> = &lt;strong>C&lt;/strong>arrier &lt;strong>S&lt;/strong>ense &lt;strong>M&lt;/strong>ultiple &lt;strong>A&lt;/strong>ccess with &lt;strong>C&lt;/strong>ollision &lt;strong>D&lt;/strong>etection&lt;/p>
&lt;ul>
&lt;li>Media access control method used in early Ethernet technology&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>C&lt;/strong>arrier &lt;strong>S&lt;/strong>ense &lt;strong>M&lt;/strong>ultiple &lt;strong>A&lt;/strong>ccess&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Carrier&lt;/strong>: transmission medium that carries data, e.g.&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Electronic bus in Ethernet network&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-18%2010.28.05.png" alt="截屏2021-03-18 10.28.05" style="zoom:67%;" />
&lt;/li>
&lt;li>
&lt;p>Band of the electromagnetic spectrum (channel) in a Wi-Fi network&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2010.28.43.png" alt="截屏2021-03-18 10.28.43" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Carrier Sense&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>A node (i.e. Network Interface Card, NIC) on a network has a sense: it can listen and hear. It can detect what is going on over the transmission medium.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Multiple Access&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Every node in the network has equal right to access to and use the shared medium, but they must take turns&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Putting them together, CSMA means: before a node transmits data, it checks or listens to the medium&lt;/p>
&lt;ul>
&lt;li>Medium not busy ➡️ the node sends its data&lt;/li>
&lt;li>Medium busy ➡️ the node backs off, waits for a random amount of time, and tries again&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>C&lt;/strong>ollision &lt;strong>D&lt;/strong>etection: A node can hear a collision when it happens&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Example&lt;/p>
&lt;p>Both A and C want to transmit their data. They check the medium and find it is not busy, so they send their messages at the same time and a collision occurs. When these two nodes hear the collision, they back off and use some kind of randomization to decide which goes first, in order to avoid another collision.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/CSMA_CD.gif" alt="CSMA_CD" style="zoom:80%;" />
&lt;/li>
&lt;/ul>
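&lt;p>The back-off step can be sketched as a toy model of binary exponential backoff: colliding nodes each pick a random slot from a doubling window, and the node with the earliest unique slot gets the medium. The one-transmission-per-round rule is a simplification:&lt;/p>

```python
import random

random.seed(1)

def csma_cd(nodes, max_attempts=16):
    """Toy CSMA/CD: waiting nodes pick random back-off slots; a tie means collision."""
    done, attempt = set(), 0
    while len(done) < len(nodes) and attempt < max_attempts:
        attempt += 1
        # Binary exponential backoff: pick a slot in [0, 2^attempt - 1].
        slots = {n: random.randrange(2 ** attempt) for n in nodes if n not in done}
        for node, slot in slots.items():
            others = [s for n, s in slots.items() if n != node]
            if slot < min(others, default=slot + 1):
                done.add(node)   # earliest unique slot: the medium was free
                print(f"attempt {attempt}: {node} transmits in slot {slot}")
                break            # one successful transmission per round
        else:
            print(f"attempt {attempt}: collision, everyone backs off")
    return done

done = csma_cd(["A", "C"])
```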
&lt;h2 id="ethernet-frame">Ethernet Frame&lt;/h2>
&lt;p>&lt;strong>Frame = a protocol data unit (PDU)&lt;/strong>&lt;/p>
&lt;p>PDU in different layer of the OSI model is named differently.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2010.49.23.png" alt="截屏2021-03-18 10.49.23" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>Among the Ethernet family, frames can differ. For any two devices to communicate, they must use the same type of frame.&lt;/p>
&lt;p>An Ethernet frame has seven main parts:&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2010.51.21.png" alt="截屏2021-03-18 10.51.21" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Preamble&lt;/strong>&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2011.01.47.png" alt="截屏2021-03-18 11.01.47" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>A 64-bit header telling the receiving node that a frame is coming and where the frame starts&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Recipient MAC&lt;/strong>&lt;/p>
&lt;p>Recipient&amp;rsquo;s MAC address&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Sender MAC&lt;/strong>&lt;/p>
&lt;p>Sender&amp;rsquo;s MAC address&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Type&lt;/strong>&lt;/p>
&lt;p>Tells the recipient the basic type of data, such as IPv4 or IPv6&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Data&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>Payload carried by the frame, such as an IP packet from the Network layer.&lt;/li>
&lt;li>Limited to 1500 bytes&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Pad&lt;/strong>&lt;/p>
&lt;p>Extra bits to make the frame at least 64 bytes long (any data unit &amp;lt; 64 bytes would be considered a collision)&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>FCS&lt;/strong> = Frame Check Sequence&lt;/p>
&lt;p>Used for error checking and integrity verification of a frame&lt;/p>
&lt;/li>
&lt;/ul>
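&lt;p>The addressing fields (recipient MAC, sender MAC, type) occupy a fixed 14-byte header that can be packed and parsed with &lt;code>struct&lt;/code>. The preamble and FCS are normally handled by the NIC hardware, so this sketch omits them; the MAC addresses below are made up:&lt;/p>

```python
import struct

def build_header(dst_mac: bytes, src_mac: bytes, ethertype: int) -> bytes:
    """Pack the 14-byte Ethernet header: 6-byte dst, 6-byte src, 2-byte type."""
    return struct.pack("!6s6sH", dst_mac, src_mac, ethertype)

def parse_header(frame: bytes):
    """Unpack the first 14 bytes of a frame back into its three fields."""
    dst, src, etype = struct.unpack("!6s6sH", frame[:14])
    return dst, src, etype

dst = bytes.fromhex("ffffffffffff")       # broadcast address
src = bytes.fromhex("001122334455")       # made-up sender MAC
header = build_header(dst, src, 0x0800)   # 0x0800 = IPv4 payload

d, s, t = parse_header(header + b"payload")
print(f"dst={d.hex(':')} src={s.hex(':')} type={t:#06x}")
```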
&lt;h2 id="spanning-tree-protocol-stp">Spanning Tree Protocol (STP)&lt;/h2>
&lt;h3 id="complete-graph-and-spanning-tree">Complete Graph and Spanning Tree&lt;/h3>
&lt;p>&lt;strong>Complete Graph&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>A graph in which each pair of graph vertices is connected by a line&lt;/p>
&lt;ul>
&lt;li>I.e., when all the points are connected by the &lt;strong>maximum&lt;/strong> number of lines, we get a complete graph&lt;/li>
&lt;/ul>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-18%2011.11.43.png" alt="截屏2021-03-18 11.11.43" style="zoom:67%;" />
&lt;/li>
&lt;li>
&lt;p>In networking field, a complete graph is like a fully meshed network&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Spanning tree&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>All points are connected by a &lt;strong>minimum&lt;/strong> number of lines&lt;/li>
&lt;/ul>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-18%2011.13.12.png" alt="截屏2021-03-18 11.13.12" style="zoom:67%;" />
&lt;ul>
&lt;li>
&lt;p>From the complete graph above, we can get three spanning trees:&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-18%2011.15.51.png" alt="截屏2021-03-18 11.15.51" style="zoom:67%;" />
&lt;p>All three points are connected and no loop is formed.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Basic features&lt;/p>
&lt;ul>
&lt;li>NO loop&lt;/li>
&lt;li>Minimally connected (i.e., removing any one line will leave some point disconnected)&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
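&lt;p>A spanning tree can be extracted from any connected graph, for instance with a breadth-first search; for the three-point complete graph above, 2 of the 3 lines remain. A minimal sketch:&lt;/p>

```python
from collections import deque

def spanning_tree(nodes, edges, root):
    """Extract a spanning tree via BFS: all nodes connected, no loop."""
    adjacent = {n: [] for n in nodes}
    for a, b in edges:
        adjacent[a].append(b)
        adjacent[b].append(a)
    tree, visited, queue = [], {root}, deque([root])
    while queue:
        node = queue.popleft()
        for neighbor in adjacent[node]:
            if neighbor not in visited:   # keep only edges that reach a new node
                visited.add(neighbor)
                tree.append((node, neighbor))
                queue.append(neighbor)
    return tree

# Complete graph on three points -> a spanning tree keeps 2 of the 3 lines.
tree = spanning_tree("ABC", [("A", "B"), ("B", "C"), ("A", "C")], root="A")
print(tree)
```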
&lt;h3 id="spanning-tree-protocol">Spanning Tree Protocol&lt;/h3>
&lt;p>Spanning Tree Protocol (STP)&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Layer 2 protocol&lt;/strong> that runs on bridges and switches and builds a loop-free logical topology.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>🎯 Main purpose: &lt;strong>eliminate loops&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Three basic steps:&lt;/p>
&lt;ol>
&lt;li>Select one switch as root bridge (central point on the network)&lt;/li>
&lt;li>Choose the shortest path (the least cost) from a switch to the root bridge
&lt;ul>
&lt;li>Path cost is calculated based on link bandwidth: the higher bandwidth, the lower the path cost.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Block links that cause loops while maintaining these links as backups&lt;/li>
&lt;/ol>
&lt;/li>
&lt;/ul>
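&lt;p>Step 2 is essentially a shortest-path computation. A sketch using Dijkstra&amp;rsquo;s algorithm, with the link costs taken from the example (B-A = 2, C-A = 1, D-C = 4, B-D = 2):&lt;/p>

```python
import heapq

def least_cost_paths(links, root):
    """Dijkstra: each switch's least path cost to the root bridge (STP step 2)."""
    graph = {}
    for a, b, cost in links:
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    best = {root: 0}
    heap = [(0, root)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > best.get(node, float("inf")):
            continue              # stale heap entry, already found a cheaper path
        for neighbor, link_cost in graph[node]:
            new_cost = cost + link_cost
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return best

# A is the root bridge; each tuple is (switch, switch, link cost).
links = [("A", "B", 2), ("A", "C", 1), ("C", "D", 4), ("B", "D", 2)]
best = least_cost_paths(links, root="A")
print(best)
```

As in the example, B reaches A directly at cost 2 rather than via D and C at cost 7.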
&lt;details>
&lt;summary>&lt;b>Example&lt;/b>&lt;/summary>
&lt;p>Suppose we have a simple network&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2011.27.53.png" alt="截屏2021-03-18 11.27.53" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>First, STP elects the root bridge. The lowest bridge ID (priority, then MAC address) determines the root bridge. Here switch A is elected as the root bridge.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2011.29.12.png" alt="截屏2021-03-18 11.29.12" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>Next, each of the other switches chooses the path to the root bridge with the least path cost. Here we skip the details of the calculation and simply mark the path cost of each link.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2011.31.34.png" alt="截屏2021-03-18 11.31.34" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>Now let&amp;rsquo;s take a look at switch B. For switch B, there are two paths to reach the root bridge, switch A:&lt;/p>
&lt;ul>
&lt;li>BDCA: costs 7 (2 + 4 + 1)&lt;/li>
&lt;li>BA: costs 2&lt;/li>
&lt;/ul>
&lt;p>Therefore, the link BA is chosen as the path from switch B to root bridge A.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2011.34.57.png" alt="截屏2021-03-18 11.34.57" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;ul>
&lt;li>RP: Root port, the port with the least cost path to the root bridge&lt;/li>
&lt;li>DP: designated port.&lt;/li>
&lt;/ul>
&lt;p>For switch C and D, the procedure is similar.
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2011.35.59.png" alt="截屏2021-03-18 11.35.59" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>Note&lt;/p>
&lt;ul>
&lt;li>
&lt;p>A non-root switch can have many designated ports, but it can have only ONE root port.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>All ports of the root bridge are designated ports. On the root bridge, there is NO root port.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>Now every switch has found the best path to reach the root bridge. And the link between D and C should be blocked in order to eliminate the loop.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2011.43.21.png" alt="截屏2021-03-18 11.43.21" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>Let&amp;rsquo;s look at the blocked link DC&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2011.43.35.png" alt="截屏2021-03-18 11.43.35" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>The port with the lowest switch ID is selected as the designated port. The port at the other end becomes the blocking port. The blocking port can still receive frames, but it will not forward or send frames; it simply drops them.&lt;/p>
&lt;/details>
&lt;h3 id="how-stp-elects-root-bridge">How STP Elects Root Bridge?&lt;/h3>
&lt;p>Root bridge election is based on an 8-byte &lt;strong>switch Bridge ID (BID)&lt;/strong>&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2021-03-18%2016.35.46.png" alt="截屏2021-03-18 16.35.46" style="zoom:67%;" />
&lt;ul>
&lt;li>2 Bytes &lt;strong>Priority Field&lt;/strong>&lt;/li>
&lt;li>6 Bytes &lt;strong>Switch MAC address&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>Root bridge election process is simple: &lt;strong>All interconnected switches exchange their BIDs&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>Whoever has the &lt;strong>lowest priority field value&lt;/strong> would become the root bridge&lt;/li>
&lt;li>If the priority fields are equal, whoever has the &lt;strong>lowest MAC address&lt;/strong> would become the root bridge&lt;/li>
&lt;/ul>
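&lt;p>The comparison rule is just a lexicographic ordering on (priority, MAC). A sketch with made-up switches; 32768 is a commonly used default priority:&lt;/p>

```python
def elect_root_bridge(switches):
    """Lowest BID wins: compare the priority field first, MAC address breaks ties."""
    return min(switches, key=lambda s: (s["priority"], s["mac"]))

switches = [
    {"name": "A", "priority": 32768, "mac": "00:0a:00:00:00:01"},
    {"name": "B", "priority": 32768, "mac": "00:0b:00:00:00:02"},
    {"name": "C", "priority": 32768, "mac": "00:0c:00:00:00:03"},
]
root = elect_root_bridge(switches)
print(f"root bridge: switch {root['name']}")
```

With equal priorities, the election falls through to the MAC address, so switch A wins.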
&lt;h4 id="bpdu">BPDU&lt;/h4>
&lt;p>Every switch multicasts its message, the &lt;strong>Hello BPDU&lt;/strong>, in which each switch declares itself the root bridge.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/BPDU.gif" alt="BPDU" style="zoom:67%;" />
&lt;p>&lt;strong>B&lt;/strong>ridge &lt;strong>P&lt;/strong>rotocol &lt;strong>D&lt;/strong>ata &lt;strong>U&lt;/strong>nit (&lt;strong>BPDU&lt;/strong>) is a frame containing information about the spanning tree protocol.&lt;/p>
&lt;p>&lt;strong>Hello BPDU&lt;/strong> is used by switches or bridges to share information about themselves. It is used for&lt;/p>
&lt;ul>
&lt;li>electing a root bridge&lt;/li>
&lt;li>determining port roles and states&lt;/li>
&lt;li>blocking unwanted links&lt;/li>
&lt;/ul>
&lt;p>In other words, Hello BPDU is used to configure a loop-free network.&lt;/p>
&lt;p>&lt;strong>Structure of BPDU:&lt;/strong>&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2016.52.57.png" alt="截屏2021-03-18 16.52.57" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>Three important fields:&lt;/p>
&lt;ul>
&lt;li>Root ID: Root Bridge BID&lt;/li>
&lt;li>Root Path cost: The best path cost to the root bridge&lt;/li>
&lt;li>Bridge ID: BPDU sender&amp;rsquo;s ID (Source BID)&lt;/li>
&lt;/ul>
&lt;h4 id="the-election-process">The Election Process&lt;/h4>
&lt;p>One thing needs to be kept in mind: each port of a switch is uniquely identified.&lt;/p>
&lt;p>Consider the example above:&lt;/p>
&lt;p>&lt;strong>Switch A, B, and C send out their Hello BPDUs. Basically each switch declares itself the root bridge.&lt;/strong>&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2017.04.04.png" alt="截屏2021-03-18 17.04.04" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>Let&amp;rsquo;s take a look at Switch A first.&lt;/p>
&lt;p>Switch A sends out its Hello BPDU to B and C&lt;/p>
&lt;ul>
&lt;li>Switch A sets its Root ID to its own BID (&lt;em>&amp;ldquo;Hello everyone, I am the root bridge&amp;rdquo;&lt;/em>)&lt;/li>
&lt;li>The path cost value is set to 0&lt;/li>
&lt;/ul>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2017.05.08.png" alt="截屏2021-03-18 17.05.08" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>Switch B and C do the same thing.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2017.07.52.png" alt="截屏2021-03-18 17.07.52" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2017.08.26.png" alt="截屏2021-03-18 17.08.26" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>Basically they all claim they are the root bridge (&amp;ldquo;the boss&amp;rdquo;) in their Hello BPDUs.&lt;/p>
&lt;p>The problem is: ONLY one can be the root bridge.&lt;/p>
&lt;p>What they do next is to &lt;strong>compare their Hello BPDUs and to elect a real boss.&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>When Switch A receives Hello BPDUs from B and C, it checks and discards their BPDUs because its bridge ID is lower than B&amp;rsquo;s and C&amp;rsquo;s. So A keeps its original Hello BPDU and still believes it is the root bridge.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/BPDA_Example_A.gif" alt="BPDA_Example_A" style="zoom:80%;" />
&lt;/li>
&lt;li>
&lt;p>When Switch B receives the Hello BPDU from C, it compares and finds its Bridge ID is lower (i.e. B&amp;rsquo;s BPDU is superior) thus discards C&amp;rsquo;s Hello BPDU&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/BPDU_Example_BC.gif" alt="BPDU_Example_BC" style="zoom:80%;" />
&lt;p>When B receives A&amp;rsquo;s Hello BPDU, it finds A&amp;rsquo;s Bridge ID is lower. It would say &amp;ldquo;Well, Switch A is the winner&amp;rdquo;.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/BPDU_Example_BA.gif" alt="BPDU_Example_BA" style="zoom:80%;" />
&lt;p>Therefore, it modifies its Root ID value by replacing its own bridge ID with Switch A&amp;rsquo;s bridge ID. It also calculates the path cost to switch A (let&amp;rsquo;s say 4), and then sends the modified Hello BPDU to others.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/BPDU_Example_BA2.gif" alt="BPDU_Example_BA2" style="zoom:80 %;" />
&lt;/li>
&lt;li>
&lt;p>When Switch C receives Hello BPDU from A and B, C finds A&amp;rsquo;s is a superior BPDU. So C changes the value of the root ID to switch A&amp;rsquo;s Bridge ID. And it calculates the path cost to switch A (let&amp;rsquo;s say 1). Then it sends its modified Hello BPDU to others.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/BPDU_Example_CA.gif" alt="BPDU_Example_CA" style="zoom:80%;" />
&lt;/li>
&lt;/ul>
&lt;p>This way, A, B, and C exchange their BPDUs again and agree that the root bridge should be switch A.&lt;/p>
&lt;p>Once the root bridge is decided, the path costs to the root bridge are calculated. Root ports, designated ports, and blocked ports are determined. STP has created a loop-free network! 👏&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%e6%88%aa%e5%b1%8f2021-03-18%2017.38.51.png" alt="截屏2021-03-18 17.38.51" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
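&lt;p>The exchange above boils down to a simple rule: every switch adopts the lowest root BID it has heard and re-advertises it, until nothing changes. A toy sketch of that convergence (the BID values and topology are made up):&lt;/p>

```python
def converge(bids, neighbors):
    """Each switch believes the lowest root BID it has heard; repeat until stable."""
    belief = {s: s for s in bids}            # everyone starts claiming to be root
    changed = True
    while changed:
        changed = False
        for switch, peers in neighbors.items():
            for peer in peers:
                # Adopt the peer's advertised root if its BID is lower (superior BPDU).
                if bids[belief[peer]] < bids[belief[switch]]:
                    belief[switch] = belief[peer]
                    changed = True
    return belief

bids = {"A": 1, "B": 2, "C": 3}              # lower BID wins
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
belief = converge(bids, neighbors)
print(belief)
```

After the exchange settles, every switch agrees that A is the root bridge.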
&lt;h2 id="reference">Reference&lt;/h2>
&lt;h4 id="csmacd-1">CSMA/CD&lt;/h4>
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/K_8KJRhOWIA?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"
>&lt;/iframe>
&lt;/div>
&lt;h4 id="7-part-of-an-ethernet-frame">7 Part of an Ethernet Frame&lt;/h4>
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/qXtS1o1HGso?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"
>&lt;/iframe>
&lt;/div>
&lt;h4 id="spanning-tree-protocol-ieee-802-1d">Spanning Tree Protocol (IEEE 802 1D)&lt;/h4>
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/Ilpmn-H8UgE?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"
>&lt;/iframe>
&lt;/div>
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/BkGEwrzIK4g?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"
>&lt;/iframe>
&lt;/div></description></item><item><title>IP Address &amp; Subnet</title><link>https://haobin-tan.netlify.app/docs/notes/telematics/understanding/ip_addr_subnet/</link><pubDate>Thu, 01 Apr 2021 00:00:00 +0000</pubDate><guid>https://haobin-tan.netlify.app/docs/notes/telematics/understanding/ip_addr_subnet/</guid><description>&lt;p>Let&amp;rsquo;s take &lt;code>10.0.0.0/8&lt;/code> as an example.&lt;/p>
&lt;p>The address &lt;code>10.0.0.0/8&lt;/code> comprises two parts&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>IP address (&lt;code>10.0.0.0&lt;/code>)&lt;/strong>&lt;/p>
&lt;p>the global addressing scheme used under the Internet Protocol&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Subnet or IP block (&lt;code>/8&lt;/code>)&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>divides the IP address space into smaller blocks/ranges.&lt;/li>
&lt;li>The &amp;ldquo;/&amp;rdquo; notation along with the number is called &lt;strong>prefix&lt;/strong>.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
&lt;p>Calculation of IP range&lt;/p>
&lt;ul>
&lt;li>The &amp;ldquo;/&amp;rdquo; notation after the IP address can be used to calculate the IP address range that falls under that category.&lt;/li>
&lt;li>All you have to do is &lt;strong>subtract&lt;/strong> the prefix from the number 32 (as IP addresses are 32-bit numbers). Put the result as an exponent of 2 and you get the number of IPs in that range.&lt;/li>
&lt;/ul>
&lt;p>For example&lt;/p>
&lt;ul>
&lt;li>to find the IP range of the &amp;ldquo;/8&amp;rdquo; prefix, we subtract the prefix 8 from 32. The result, 24, is used as an exponent of 2. Hence, the IP range you get is 2 to the power of 24, i.e. 16777216 IPs.&lt;/li>
&lt;li>Thus, &amp;ldquo;&lt;code>10.0.0.0/8&lt;/code>&amp;rdquo; refers to an IP block ranging from &amp;ldquo;&lt;code>10.0.0.0&lt;/code>&amp;rdquo; to &amp;ldquo;&lt;code>10.255.255.255&lt;/code>&amp;rdquo;.&lt;/li>
&lt;/ul></description></item><item><title>Understanding</title><link>https://haobin-tan.netlify.app/docs/notes/stochastische_informationsverarbeitung/understanding/</link><pubDate>Wed, 22 Jun 2022 00:00:00 +0000</pubDate><guid>https://haobin-tan.netlify.app/docs/notes/stochastische_informationsverarbeitung/understanding/</guid><description/></item><item><title>Kalman Filter</title><link>https://haobin-tan.netlify.app/docs/notes/stochastische_informationsverarbeitung/understanding/kalman_filter/</link><pubDate>Fri, 24 Jun 2022 00:00:00 +0000</pubDate><guid>https://haobin-tan.netlify.app/docs/notes/stochastische_informationsverarbeitung/understanding/kalman_filter/</guid><description>&lt;p>The Kalman filter is an efficient &lt;em>recursive&lt;/em> filter estimating the internal-state of a &lt;a href="https://en.wikipedia.org/wiki/Linear_dynamical_system">linear dynamic system&lt;/a> from a series of noisy measurements.&lt;/p>
&lt;p>Applications of Kalman filter include&lt;/p>
&lt;ul>
&lt;li>Guidance&lt;/li>
&lt;li>Navigation&lt;/li>
&lt;li>Control of vehicles, aircraft, spacecraft, and ships positioned dynamically&lt;/li>
&lt;/ul>
&lt;p>💡 The basic idea of the Kalman filter is to &lt;strong>achieve the optimal estimate of the (hidden) internal state by combining the state prediction and the measurement, weighted by their respective uncertainties&lt;/strong>.&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-01%2023.04.50.png" alt="截屏2022-07-01 23.04.50">&lt;/p>
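&lt;p>A minimal 1-D sketch of this predict/update cycle (scalar state, so the matrices in the summary below reduce to numbers; the noise values &lt;code>q&lt;/code>, &lt;code>r&lt;/code> and the measurements are made up):&lt;/p>

```python
def kalman_step(x_est, p_est, z, q=0.01, r=0.5):
    """One predict/update cycle of a 1-D Kalman filter for a constant state."""
    # Predict: the state is assumed constant, so only the uncertainty grows.
    x_pred, p_pred = x_est, p_est + q
    # Update: the Kalman gain trades off prediction vs. measurement uncertainty.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)       # weighted combination
    p_new = (1 - k) * p_pred
    return x_new, p_new

measurements = [10.4, 9.7, 10.1, 9.9, 10.2, 10.05]  # noisy readings of a true value of 10
x, p = 0.0, 100.0            # start with a poor guess and huge uncertainty
for z in measurements:
    x, p = kalman_step(x, p, z)
print(f"estimate {x:.2f}, uncertainty {p:.3f}")
```

Despite the terrible initial guess, the estimate is pulled close to the true value within a few steps, and the uncertainty shrinks with every measurement.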
&lt;h2 id="kalman-filter-summary">Kalman Filter Summary&lt;/h2>
&lt;p>Kalman filter in a picture:&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/kalman_filter-Kalman_filter_summary.drawio.png" alt="kalman_filter-Kalman_filter_summary.drawio">&lt;/p>
&lt;p>Summary of equations:&lt;/p>
&lt;style type="text/css">
.tg {border-collapse:collapse;border-color:#ccc;border-spacing:0;}
.tg td{background-color:#fff;border-color:#ccc;border-style:solid;border-width:1px;color:#333;
font-family:Arial, sans-serif;font-size:14px;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{background-color:#f0f0f0;border-color:#ccc;border-style:solid;border-width:1px;color:#333;
font-family:Arial, sans-serif;font-size:14px;font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-0pky{border-color:inherit;text-align:left;vertical-align:top}
.tg .tg-fymr{border-color:inherit;font-weight:bold;text-align:left;vertical-align:top}
&lt;/style>
&lt;table class="tg">
&lt;thead>
&lt;tr>
&lt;th class="tg-0pky">&lt;/th>
&lt;th class="tg-fymr">Equation&lt;/th>
&lt;th class="tg-fymr">Equation Name&lt;/th>
&lt;th class="tg-fymr">Alternative Names&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td class="tg-fymr" rowspan="2">Predict&lt;/td>
&lt;td class="tg-0pky">$\hat{\boldsymbol{x}}_{n, n-1}=\mathbf{F} \hat{\boldsymbol{x}}_{n-1, n-1} + \mathbf{G} \boldsymbol{u}_{n}$&lt;/td>
&lt;td class="tg-0pky">&lt;span style="font-weight:normal;font-style:normal;text-decoration:none">State Extrapolation&lt;/span>&lt;/td>
&lt;td class="tg-0pky">Predictor Equation&lt;br>Transition Equation&lt;br>Prediction Equation&lt;br>Dynamic Model&lt;br>State Space Model&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0pky">$\mathbf{P}_{n, n-1}=\mathbf{F} \mathbf{P}_{n-1, n-1} \mathbf{F}^{T}+\mathbf{Q}$&lt;/td>
&lt;td class="tg-0pky">&lt;span style="font-weight:normal;font-style:normal;text-decoration:none">Covariance Extrapolation&lt;/span>&lt;/td>
&lt;td class="tg-0pky">&lt;span style="font-weight:400;font-style:normal">Predictor Covariance Equation&lt;/span>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-fymr" rowspan="3">Update&lt;/td>
&lt;td class="tg-0pky">$\mathbf{K}_{n}=\mathbf{P}_{n, n-1} \mathbf{H}^{T}\left(\mathbf{H} \mathbf{P}_{n, n-1} \mathbf{H}^{T}+\mathbf{R}_{n}\right)^{-1}$&lt;/td>
&lt;td class="tg-0pky">&lt;span style="font-weight:normal;font-style:normal;text-decoration:none">Kalman Gain&lt;/span>&lt;/td>
&lt;td class="tg-0pky">&lt;span style="font-weight:normal;font-style:normal;text-decoration:none">Weight Equation&lt;/span>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0pky">$\hat{\boldsymbol{x}}_{\boldsymbol{n}, \boldsymbol{n}}=\hat{\boldsymbol{x}}_{\boldsymbol{n}, \boldsymbol{n}-1}+\mathbf{K}_{\boldsymbol{n}}\left(\boldsymbol{z}_{n}-\mathbf{H} \hat{\boldsymbol{x}}_{\boldsymbol{n}, \boldsymbol{n}-1}\right)$&lt;/td>
&lt;td class="tg-0pky">&lt;span style="font-weight:normal;font-style:normal;text-decoration:none">State Update&lt;/span>&lt;/td>
&lt;td class="tg-0pky">&lt;span style="font-weight:normal;font-style:normal;text-decoration:none">Filtering Equation&lt;/span>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0pky">$\mathbf{P}_{n, n}=\left(\mathbf{I}-\mathbf{K}_{n} \mathbf{H}\right) \mathbf{P}_{n, n-1}$&lt;/td>
&lt;td class="tg-0pky">Covariance Update&lt;/td>
&lt;td class="tg-0pky">&lt;span style="font-weight:normal;font-style:normal;text-decoration:none">Corrector Equation&lt;/span>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-fymr" rowspan="4">&lt;span style="font-style:italic">Auxilliary&lt;/span>&lt;/td>
&lt;td class="tg-0pky">$\boldsymbol{z}_{n} = \mathbf{H} \boldsymbol{x}_n + \boldsymbol{v}_n$&lt;/td>
&lt;td class="tg-0pky">&lt;span style="font-weight:normal;font-style:normal;text-decoration:none">Measurement Equation&lt;/span>&lt;/td>
&lt;td class="tg-0pky">&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0pky">$\mathbf{R}_n = E\{\boldsymbol{v}_n \boldsymbol{v}_n^T\}$&lt;/td>
&lt;td class="tg-0pky">&lt;span style="font-weight:normal;font-style:normal;text-decoration:none">Measurement Uncertainty&lt;/span>&lt;/td>
&lt;td class="tg-0pky">&lt;span style="font-weight:normal;font-style:normal;text-decoration:none">Measurement Error&lt;/span>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0pky">$\mathbf{Q}_n = E\{\boldsymbol{w}_n \boldsymbol{w}_n^T\}$&lt;/td>
&lt;td class="tg-0pky">&lt;span style="font-weight:normal;font-style:normal;text-decoration:none">Process Noise Uncertainty&lt;/span>&lt;/td>
&lt;td class="tg-0pky">&lt;span style="font-weight:normal;font-style:normal;text-decoration:none">Process Noise Error&lt;/span>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0pky">$\mathbf{P}_{n, n}=E\left\{\boldsymbol{e}_{n} \boldsymbol{e}_{n}^{T}\right\}=E\left\{\left(\boldsymbol{x}_{n}-\hat{\boldsymbol{x}}_{n, n}\right)\left(\boldsymbol{x}_{n}-\hat{\boldsymbol{x}}_{n, n}\right)^{T}\right\}$&lt;/td>
&lt;td class="tg-0pky">&lt;span style="font-weight:normal;font-style:normal;text-decoration:none">Estimation Uncertainty&lt;/span>&lt;/td>
&lt;td class="tg-0pky">&lt;span style="font-weight:normal;font-style:normal;text-decoration:none">Estimation Error&lt;/span>&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;p>Summary of notations:&lt;/p>
&lt;style type="text/css">
.tg {border-collapse:collapse;border-color:#ccc;border-spacing:0;}
.tg td{background-color:#fff;border-color:#ccc;border-style:solid;border-width:1px;color:#333;
font-family:Arial, sans-serif;font-size:14px;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{background-color:#f0f0f0;border-color:#ccc;border-style:solid;border-width:1px;color:#333;
font-family:Arial, sans-serif;font-size:14px;font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-1wig{font-weight:bold;text-align:left;vertical-align:top}
.tg .tg-0lax{text-align:left;vertical-align:top}
&lt;/style>
&lt;table class="tg">
&lt;thead>
&lt;tr>
&lt;th class="tg-1wig">Term&lt;/th>
&lt;th class="tg-1wig">Name&lt;/th>
&lt;th class="tg-1wig">Alternative Term&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td class="tg-0lax">$\boldsymbol{x}$&lt;/td>
&lt;td class="tg-0lax">State vector&lt;/td>
&lt;td class="tg-0lax">&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0lax">&lt;span style="font-weight:400;font-style:normal">$\boldsymbol{z}$&lt;/span>&lt;/td>
&lt;td class="tg-0lax">Output vector&lt;/td>
&lt;td class="tg-0lax">&lt;span style="font-weight:400;font-style:normal">$\boldsymbol{y}$&lt;/span>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0lax">$\mathbf{F}$&lt;/td>
&lt;td class="tg-0lax">State transition matrix&lt;/td>
&lt;td class="tg-0lax">&lt;span style="font-weight:400;font-style:normal">$\mathbf{\Phi}$, $\mathbf{A}$&lt;/span>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0lax">&lt;span style="font-weight:400;font-style:normal">$\boldsymbol{u}$&lt;/span>&lt;/td>
&lt;td class="tg-0lax">Input variable&lt;/td>
&lt;td class="tg-0lax">&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0lax">$\mathbf{G}$&lt;/td>
&lt;td class="tg-0lax">Control matrix&lt;/td>
&lt;td class="tg-0lax">$\boldsymbol{B}$&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0lax">&lt;span style="font-weight:400;font-style:normal">$\mathbf{P}$&lt;/td>
&lt;td class="tg-0lax">Estimate uncertainty&lt;/td>
&lt;td class="tg-0lax">&lt;span style="font-weight:400;font-style:normal">$\boldsymbol{\Sigma}$&lt;/span>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0lax">&lt;span style="font-weight:400;font-style:normal">$\mathbf{Q}$&lt;/td>
&lt;td class="tg-0lax">Process noise uncertainty&lt;/td>
&lt;td class="tg-0lax">&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0lax">&lt;span style="font-weight:400;font-style:normal">$\mathbf{R}$&lt;/span>&lt;/td>
&lt;td class="tg-0lax">Measurement uncertainty&lt;/td>
&lt;td class="tg-0lax">&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0lax">&lt;span style="font-weight:400;font-style:normal">$\boldsymbol{w}$&lt;/span>&lt;/td>
&lt;td class="tg-0lax">Process noise vector&lt;/td>
&lt;td class="tg-0lax">&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0lax">&lt;span style="font-weight:400;font-style:normal">$\boldsymbol{v}$&lt;/span>&lt;/td>
&lt;td class="tg-0lax">Measurement noise vector&lt;/td>
&lt;td class="tg-0lax">&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0lax">&lt;span style="font-weight:400;font-style:normal">$\mathbf{H}$&lt;/span>&lt;/td>
&lt;td class="tg-0lax">Observation matrix&lt;/td>
&lt;td class="tg-0lax">&lt;span style="font-weight:400;font-style:normal">$\mathbf{C}$&lt;/span>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0lax">&lt;span style="font-weight:400;font-style:normal">$\mathbf{K}$&lt;/span>&lt;/td>
&lt;td class="tg-0lax">Kalman Gain&lt;/td>
&lt;td class="tg-0lax">&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td class="tg-0lax">$n$&lt;/td>
&lt;td class="tg-0lax">Discrete time index&lt;/td>
&lt;td class="tg-0lax">$k$&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;h2 id="multidimensional-kalman-filter-in-detail">Multidimensional Kalman Filter in Detail&lt;/h2>
&lt;p>A Kalman filter works by a two-phase process, including 5 main equations:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Predict&lt;/strong> phase: produces a prediction of the current state, along with its uncertainty
&lt;ul>
&lt;li>&lt;a href="#state-extrapolation-equation">State extrapolation equation&lt;/a>&lt;/li>
&lt;li>&lt;a href="#covariance-extrapolation-equation">Covariance extrapolation equation&lt;/a>&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Update&lt;/strong> phase: checks how well the prediction fits the current measurement and refines the state estimate as a weighted average of prediction and measurement.
&lt;ul>
&lt;li>&lt;a href="#kalman-gain-equation">Kalman Gain equation&lt;/a>&lt;/li>
&lt;li>&lt;a href="#state-update-equation">State update equation&lt;/a>&lt;/li>
&lt;li>&lt;a href="#covariance-update-equation">Covariance update equation&lt;/a>&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
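&lt;p>The two-phase loop can be sketched in NumPy. This is a minimal illustration, not taken from any reference implementation: the constant-velocity model, the function names &lt;code>kf_predict&lt;/code>/&lt;code>kf_update&lt;/code>, and all numbers are assumed for the example.&lt;/p>

```python
import numpy as np

# Hedged sketch of one predict/update cycle, assuming a 1D constant-velocity
# model (state = [position, velocity], dt = 1) with no control input.
# Matrix names follow the equations in this section: F, G, Q, H, R, P, K.

def kf_predict(x, P, F, G, u, Q):
    """State extrapolation and covariance extrapolation."""
    x_pred = F @ x + G @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Kalman Gain, state update, and covariance update."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman Gain equation
    x = x_pred + K @ (z - H @ x_pred)        # state update equation
    I = np.eye(P_pred.shape[0])
    P = (I - K @ H) @ P_pred @ (I - K @ H).T + K @ R @ K.T  # covariance update
    return x, P, K

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity transition
G = np.zeros((2, 1))                     # no control input
Q = 0.01 * np.eye(2)                     # process noise uncertainty
H = np.array([[1.0, 0.0]])               # only position is measured
R = np.array([[1.0]])                    # measurement uncertainty

x = np.array([[0.0], [1.0]])             # initial state estimate
P = 10.0 * np.eye(2)                     # initial estimate uncertainty
x_pred, P_pred = kf_predict(x, P, F, G, np.zeros((1, 1)), Q)
x_new, P_new, K = kf_update(x_pred, P_pred, np.array([[1.2]]), H, R)
```

&lt;p>Each new measurement simply repeats the cycle: the updated $\hat{\boldsymbol{x}}_{n,n}$ and $\mathbf{P}_{n,n}$ become the inputs of the next prediction.&lt;/p>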
&lt;h3 id="state-extrapolation-equation">State extrapolation equation&lt;/h3>
&lt;p>The Kalman filter assumes that the true state of a system at time step $n$ evolves from the state at time step $n-1$ according to&lt;/p>
$$
\boldsymbol{x}_n = \mathbf{F} \boldsymbol{x}_{n-1} +\mathbf{G} \boldsymbol{u}_{n} + \boldsymbol{w}_n
$$
&lt;ul>
&lt;li>
&lt;p>$\boldsymbol{x}_{n}$
: &lt;strong>state vector&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>$\boldsymbol{u}_{n}$
: &lt;strong>control variable&lt;/strong> or &lt;strong>input variable&lt;/strong> - a measurable (deterministic) input to the system&lt;/p>
&lt;/li>
&lt;li>
&lt;p>$\boldsymbol{w}_n$
: &lt;strong>process noise&lt;/strong> or disturbance - an unmeasurable input that affects the state&lt;/p>
&lt;/li>
&lt;li>
&lt;p>$\mathbf{F}$
: &lt;strong>state transition matrix&lt;/strong> - applies the effect of each system state parameter at time step $n-1$ on the system state at time step $n$&lt;/p>
&lt;/li>
&lt;li>
&lt;p>$\mathbf{G}$
: &lt;strong>control matrix&lt;/strong> or &lt;strong>input transition matrix&lt;/strong> (mapping control to state variables)&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>The &lt;strong>state extrapolation equation&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Predicts the next system state, based on the knowledge of the current state&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Extrapolates the state vector from time step $n-1$ to $n$&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Also called&lt;/p>
&lt;ul>
&lt;li>Predictor Equation&lt;/li>
&lt;li>Transition Equation&lt;/li>
&lt;li>Prediction Equation&lt;/li>
&lt;li>Dynamic Model&lt;/li>
&lt;li>State Space Model&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>
&lt;p>The general form in a matrix notation&lt;/p>
$$
\hat{\boldsymbol{x}}_{n, n-1}=\mathbf{F} \hat{\boldsymbol{x}}_{n-1, n-1}+\mathbf{G} \boldsymbol{u}_{n}
$$
&lt;ul>
&lt;li>
&lt;p>$\hat{\boldsymbol{x}}_{n, n-1}$
: predicted system state vector at time step $n$&lt;/p>
&lt;/li>
&lt;li>
&lt;p>$\hat{\boldsymbol{x}}_{n-1, n-1}$
: estimated system state vector at time step $n-1$&lt;/p>
&lt;/li>
&lt;/ul>
&lt;blockquote>
&lt;p>$\hat{\boldsymbol{x}}_{n, m}$
represents the estimate of $\boldsymbol{x}$ at time step $n$ given observation/measurements up to and including at time $m \leq n$&lt;/p>
&lt;/blockquote>
&lt;/li>
&lt;/ul>
&lt;h4 id="example">Example&lt;/h4>
&lt;ul>
&lt;li>&lt;a href="https://www.kalmanfilter.net/stateextrap.html#ex1">Airplane without control input&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://www.kalmanfilter.net/stateextrap.html#ex2">Airplane with control input&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://www.kalmanfilter.net/stateextrap.html#ex3">Falling object&lt;/a>&lt;/li>
&lt;/ul>
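&lt;p>As a quick numeric sketch (numbers assumed for illustration, independent of the linked examples): for a constant-acceleration motion model with $\Delta t = 0.5\,s$, the state extrapolation equation works out as follows.&lt;/p>

```python
import numpy as np

# Sketch of the state extrapolation equation x_pred = F x_prev + G u for an
# assumed constant-acceleration model; state = [position, velocity], dt = 0.5 s.
dt = 0.5
F = np.array([[1.0, dt],
              [0.0, 1.0]])          # state transition matrix
G = np.array([[0.5 * dt**2],
              [dt]])                # control matrix (maps acceleration to state)

x_prev = np.array([[10.0],          # estimated position at step n-1 (m)
                   [2.0]])          # estimated velocity at step n-1 (m/s)
u = np.array([[1.0]])               # measured acceleration input (m/s^2)

x_pred = F @ x_prev + G @ u         # predicted state: [[11.125], [2.5]]
```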
&lt;h3 id="covariance-extrapolation-equation">Covariance extrapolation equation&lt;/h3>
&lt;p>The &lt;strong>covariance extrapolation equation&lt;/strong> extrapolates the uncertainty in our &lt;a href="#state-extrapolation-equation">state prediction&lt;/a>.&lt;/p>
$$
\mathbf{P}_{n, n-1}=\mathbf{F} \mathbf{P}_{n-1, n-1} \mathbf{F}^{T}+\mathbf{Q}
$$
&lt;ul>
&lt;li>
&lt;p>$\mathbf{P}_{n-1, n-1}$
: uncertainty (covariance matrix) of the estimate at time step $n-1$&lt;/p>
$$
\begin{aligned}
\mathbf{P}_{n-1, n-1} &amp;= E\{\underbrace{(\boldsymbol{x}_{n-1} - \hat{\boldsymbol{x}}_{n-1, n-1})}_{=: \boldsymbol{e}_{n-1}} (\boldsymbol{x}_{n-1} - \hat{\boldsymbol{x}}_{n-1, n-1}) ^T\} \\
&amp; = E\{\boldsymbol{e}_{n-1} \boldsymbol{e}_{n-1}^T\}
\end{aligned}
$$
&lt;/li>
&lt;li>
&lt;p>$\mathbf{P}_{n, n-1}$
: uncertainty (covariance matrix) of the prediction at time step $n$&lt;/p>
&lt;/li>
&lt;li>
&lt;p>$\mathbf{F}$
: state transition matrix&lt;/p>
&lt;/li>
&lt;li>
&lt;p>$\mathbf{Q}$
: process noise matrix&lt;/p>
$$
\mathbf{Q}_n = E\{\boldsymbol{w}_n \boldsymbol{w}_n^T\}
$$
&lt;ul>
&lt;li>$\boldsymbol{w}_n$
: process noise vector&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
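&lt;p>A small numeric sketch (all values assumed) shows how covariance extrapolation couples uncertainties through $\mathbf{F}$:&lt;/p>

```python
import numpy as np

# Covariance extrapolation P_pred = F P_prev F^T + Q for a constant-velocity
# transition matrix (dt = 1); the numbers are illustrative.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
P_prev = np.array([[4.0, 0.0],
                   [0.0, 1.0]])     # diagonal: uncorrelated uncertainties
Q = 0.1 * np.eye(2)                 # process noise matrix

P_pred = F @ P_prev @ F.T + Q       # [[5.1, 1.0], [1.0, 1.1]]
# Although P_prev was diagonal, extrapolating through F introduces
# off-diagonal terms: position and velocity uncertainty become correlated.
```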
&lt;details class="spoiler " id="spoiler-18">
&lt;summary class="cursor-pointer">Derivation&lt;/summary>
&lt;div class="rounded-lg bg-neutral-50 dark:bg-neutral-800 p-2">
&lt;p>At time step $n$, the Kalman filter assumes&lt;/p>
$$
\boldsymbol{x}\_n = \mathbf{F} \boldsymbol{x}\_{n-1} +\mathbf{G} \boldsymbol{u}\_{n} + \boldsymbol{w}\_n
$$
&lt;p>The prediction of state is&lt;/p>
$$
\hat{\boldsymbol{x}}\_{n, n-1}=\mathbf{F} \hat{\boldsymbol{x}}\_{n-1, n-1}+\mathbf{G} \boldsymbol{u}\_{n}
$$
&lt;p>The difference between $\boldsymbol{x}\_n$ and $\hat{\boldsymbol{x}}\_{n, n-1}$ is&lt;/p>
$$
\begin{aligned}
\boldsymbol{x}\_{n}-\hat{\boldsymbol{x}}\_{n, n-1} &amp;=\mathbf{F} \boldsymbol{x}\_{n-1}+\mathbf{G} \boldsymbol{u}\_{n}+\boldsymbol{w}\_{n}-\left(\mathbf{F} \hat{\boldsymbol{x}}\_{n-1, n-1}+\mathbf{G} \boldsymbol{u}\_{n}\right) \\\\
&amp;=\mathbf{F}\left(\boldsymbol{x}\_{n-1}-\hat{\boldsymbol{x}}\_{n-1, n-1}\right)+\boldsymbol{w}\_{n}
\end{aligned}
$$
&lt;p>The variance associated with the prediction $\hat{\boldsymbol{x}}\_{n, n-1}$ of the unknown true state $\boldsymbol{x}\_n$ is&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/image-20220625180624325.png" alt="image-20220625180624325">&lt;/p>
&lt;p>Noting that the state estimation errors and process noise are uncorrelated:&lt;/p>
$$
E\left\\{\left(\boldsymbol{x}\_{n-1}-\hat{\boldsymbol{x}}\_{n-1, n-1}\right) \boldsymbol{w}\_{n}^{T}\right\\} = E\left\\{\boldsymbol{w}\_{n}\left(\boldsymbol{x}\_{n-1}-\hat{\boldsymbol{x}}\_{n-1, n-1}\right)^{T}\right\\} = 0
$$
&lt;p>Therefore&lt;/p>
$$
\begin{aligned}
\mathbf{P}\_{n, n-1} &amp;=\mathbf{F}\underbrace{E\left\\{\left(\boldsymbol{x}\_{n-1}-\hat{\boldsymbol{x}}\_{n-1, n-1}\right)\left(\boldsymbol{x}\_{n-1}-\hat{\boldsymbol{x}}\_{n-1, n-1}\right)^{T}\right\\}}\_{=\mathbf{P}\_{n-1, n-1}} \mathbf{F}^{T}+\underbrace{E\left\\{\boldsymbol{w}\_{n} \boldsymbol{w}\_{n}^{T}\right\\}}\_{=\mathbf{Q}} \\\\
&amp;=\mathbf{F} \mathbf{P}\_{n-1, n-1}\mathbf{F}^T+\mathbf{Q}
\end{aligned}
$$
&lt;/div>
&lt;/details>
&lt;h3 id="kalman-gain-equation">Kalman Gain equation&lt;/h3>
&lt;p>The Kalman Gain is calculated so that it minimizes the covariance of the &lt;em>a posteriori&lt;/em> state estimate.&lt;/p>
$$
\mathbf{K}_{n}=\mathbf{P}_{n, n-1} \mathbf{H}^{T}\left(\mathbf{H} \mathbf{P}_{n, n-1} \mathbf{H}^{T}+\mathbf{R}_{n}\right)^{-1}
$$
&lt;ul>
&lt;li>$\mathbf{P}_{n, n-1}$
: &lt;a href="#covariance-extrapolation-equation">uncertainty (covariance) matrix of the current state prediction&lt;/a>&lt;/li>
&lt;li>$\mathbf{H}$
: observation matrix&lt;/li>
&lt;li>$\mathbf{R}_{n}$
: measurement Uncertainty (measurement noise covariance matrix)&lt;/li>
&lt;/ul>
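&lt;p>Computing the Kalman Gain for a two-state system with a single position measurement can be sketched as follows (all numbers assumed):&lt;/p>

```python
import numpy as np

# Kalman Gain K = P_pred H^T (H P_pred H^T + R)^{-1}; numbers are illustrative.
P_pred = np.array([[5.0, 1.0],
                   [1.0, 1.1]])     # prediction uncertainty
H = np.array([[1.0, 0.0]])          # observation matrix: measure position only
R = np.array([[2.0]])               # measurement uncertainty

S = H @ P_pred @ H.T + R            # innovation covariance (here a 1x1 matrix)
K = P_pred @ H.T @ np.linalg.inv(S)
# K has one column per measurement: its rows say how strongly the position
# and velocity estimates respond to a position measurement.
```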
&lt;details class="spoiler " id="spoiler-23">
&lt;summary class="cursor-pointer">Derivation&lt;/summary>
&lt;div class="rounded-lg bg-neutral-50 dark:bg-neutral-800 p-2">
&lt;p>Rearrange the &lt;a href="#covariance-update-equation">covariance update equation&lt;/a>
&lt;/p>
$$
\begin{array}{l}
\mathbf{P}\_{n, n}=\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right) \mathbf{P}\_{n, n-1}{\color{DodgerBlue} \left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)^{T}}+\mathbf{K}\_{n} \mathbf{R}\_{n} \mathbf{K}\_{n}^{T} \\\\\\\\
\mathbf{P}\_{n, n}=\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right) \mathbf{P}\_{n, n-1}{\color{DodgerBlue}\left(\mathbf{I}-\left(\mathbf{K}\_{n} \mathbf{H}\right)^{T}\right)}+\mathbf{K}\_{n} \mathbf{R}\_{n} \mathbf{K}\_{n}^{T} \qquad | \text{ } \mathbf{I} = \mathbf{I}^T \\\\\\\\
\mathbf{P}\_{n, n}={\color{ForestGreen}\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right) \mathbf{P}\_{n, n-1}}{\color{DodgerBlue}\left(\mathbf{I}-\mathbf{H}^{T} \mathbf{K}\_{n}^{T}\right)}+\mathbf{K}\_{n} \mathbf{R}\_{n} \mathbf{K}\_{n}^{T} \qquad | \text{ } (\mathbf{AB})^T = \mathbf{B}^T \mathbf{A}^T\\\\\\\\
\mathbf{P}\_{n, n}={\color{ForestGreen}\left(\mathbf{P}\_{n, n-1}-\mathbf{K}\_{n} \mathbf{H} \mathbf{P}\_{n, n-1}\right)}\left(\mathbf{I}-\mathbf{H}^{T} \mathbf{K}\_{n}^{T}\right)+\mathbf{K}\_{n} \mathbf{R}\_{n} \mathbf{K}\_{n}^{T} \\\\\\\\
\mathbf{P}\_{n, n}=\mathbf{P}\_{n, n-1}-\mathbf{P}\_{n, n-1} \mathbf{H}^{T} \mathbf{K}\_{n}^{T}-\mathbf{K}\_{n} \mathbf{H} \mathbf{P}\_{n, n-1} \\\\
+{\color{MediumOrchid}\mathbf{K}\_{n} \mathbf{H} \mathbf{P}\_{n, n-1} \mathbf{H}^{T} \mathbf{K}\_{n}^{T}+\mathbf{K}\_{n} \mathbf{R}\_{n} \mathbf{K}\_{n}^{T}} \qquad | \text{ } \mathbf{AB}\mathbf{A}^T + \mathbf{AC}\mathbf{A}^T = \mathbf{A}(\mathbf{B} + \mathbf{C})\mathbf{A}^T
\\\\\\\\
\mathbf{P}\_{n, n}=\mathbf{P}\_{n, n-1}-\mathbf{P}\_{n, n-1} \mathbf{H}^{T} \mathbf{K}\_{n}^{T}-\mathbf{K}\_{n} \mathbf{H} \mathbf{P}\_{n, n-1} \\\\
+{\color{MediumOrchid}\mathbf{K}\_{n}\left(\mathbf{H} \mathbf{P}\_{n, n-1} \mathbf{H}^{T}+\boldsymbol{\mathbf{R}}\_{n}\right) \mathbf{K}\_{n}^{T}}
\end{array}
$$
&lt;p>As the Kalman Filter is an &lt;strong>optimal filter&lt;/strong>, we will seek a Kalman Gain that minimizes the estimate variance.&lt;/p>
&lt;p>In order to minimize the estimate variance, we need to minimize the main diagonal (from the upper left to the lower right) of the covariance matrix $\mathbf{P}\_{n, n}$
.&lt;/p>
&lt;p>The sum of the main diagonal of the square matrix is the &lt;strong>trace&lt;/strong> of the matrix. Thus, we need to minimize $tr(\mathbf{P}\_{n, n})$
. In order to find the conditions required to produce a minimum, we will differentiate $tr(\mathbf{P}\_{n, n})$
w.r.t. $\mathbf{K}\_n$ and set the result to zero.&lt;/p>
$$
\begin{array}{l}
tr\left(\mathbf{P}\_{\boldsymbol{n}, \boldsymbol{n}}\right)=tr\left(\mathbf{P}\_{\boldsymbol{n}, \boldsymbol{n}-1}\right)-{\color{DarkOrange}tr\left(\mathbf{P}\_{n, n-1} \mathbf{H}^{T} \mathbf{K}\_{n}^{T}\right)}\\\\
{\color{DarkOrange} -tr\left(\mathbf{K}\_{n} \mathbf{H} \mathbf{P}\_{n, n-1}\right)} + tr\left(\mathbf{K}\_{\boldsymbol{n}}\left(\mathbf{H} \mathbf{P}\_{\boldsymbol{n}, \boldsymbol{n}-\mathbf{1}} \mathbf{H}^{\boldsymbol{T}}+\mathbf{R}\_{\boldsymbol{n}}\right) \mathbf{K}\_{n}^{\boldsymbol{T}}\right) \qquad | \text{} tr(\mathbf{A}) = tr(\mathbf{A}^T)\\\\\\\\
tr\left(\mathbf{P}\_{n, n}\right)=tr\left(\mathbf{P}\_{n, n-1}\right)-{\color{DarkOrange}2 tr\left(\mathbf{K}\_{n} \mathbf{H} \mathbf{P}\_{n, n-1}\right)}\\\\
+tr\left(\mathbf{K}\_{n}\left(\mathbf{H} \mathbf{P}\_{n, n-1} \mathbf{H}^{\boldsymbol{T}}+\mathbf{R}\_{n}\right) \mathbf{K}\_{n}^{T}\right)\\\\\\\\
\frac{d}{d \mathbf{K}\_{n}}t r\left(\mathbf{P}\_{n, n}\right)={\color{DodgerBlue} \frac{d}{d \mathbf{K}\_{n}}t r\left(\mathbf{P}\_{n, n-1}\right)}-{\color{ForestGreen}\frac{d }{d \mathbf{K}\_{n}}2 t r\left(\mathbf{K}\_{n} \mathbf{H} \mathbf{P}\_{n, n-1}\right)} \\\\
+{\color{MediumOrchid}\frac{d tr(\mathbf{K}\_{n}(\mathbf{H}\mathbf{P}\_{n, n-1}\mathbf{H}^T + \mathbf{R}\_n)\mathbf{K}\_{n}^T)}{d\mathbf{K}\_{n}}} \overset{!}{=} 0 \quad \mid {\color{ForestGreen} \frac{d}{d \mathbf{A}}tr(\mathbf{A} \mathbf{B}) = \mathbf{B}^T},{\color{MediumOrchid} \frac{d}{d \mathbf{A}}tr(\mathbf{A} \mathbf{B} \mathbf{A}^T) = 2\mathbf{A} \mathbf{B}}\\\\\\\\
\frac{d\left(t r\left(\mathbf{P}\_{n, n}\right)\right)}{d \mathbf{K}\_{n}}={\color{DodgerBlue}0}-{\color{ForestGreen}2\left(\mathbf{H} \mathbf{P}\_{ n , n - 1 }\right)^{T}}\\\\
+{\color{MediumOrchid}2 \mathbf{K}\_{n}\left(\mathbf{H} \mathbf{P}\_{n, n-1} \mathbf{H}^{T}+\mathbf{R}\_{n}\right)}=0\\\\\\\\
{\color{ForestGreen}\left(\mathbf{H} \mathbf{P}\_{n, n-1}\right)^{T}}={\color{MediumOrchid}\mathbf{K}\_{n}\left(\mathbf{H} \mathbf{P}\_{n, n-1} \mathbf{H}^{T}+\mathbf{R}\_{n}\right)} \\\\\\\\
\mathbf{K}\_{n}=\left(\mathbf{H} \mathbf{P}\_{n, n-1}\right)^{T}\left(\mathbf{H} \mathbf{P}\_{n, n-1} \mathbf{H}^{T}+\mathbf{R}\_{n}\right)^{-1} \quad \mid (\mathbf{AB})^T = \mathbf{B}^T \mathbf{A}^T \\\\\\\\
\mathbf{K}\_{n}=\mathbf{P}\_{n, n-1}^{T} \mathbf{H}^{T}\left(\mathbf{H} \mathbf{P}\_{n, n-1} \mathbf{H}^{T}+\mathbf{R}\_{n}\right)^{-1} \quad | \text{ covariance matrix } \mathbf{P} \text{ symmetric } (\mathbf{P}^T = \mathbf{P})\\\\\\\\
\mathbf{K}\_{n}=\mathbf{P}\_{n, n-1} \mathbf{H}^{T}\left(\mathbf{H} \mathbf{P}\_{n, n-1} \mathbf{H}^{T}+\mathbf{R}\_{n}\right)^{-1}
\end{array}
$$
&lt;/div>
&lt;/details>
&lt;h4 id="kalman-gain-intuition">Kalman Gain intuition&lt;/h4>
&lt;div class="flex px-4 py-3 mb-6 rounded-md bg-primary-100 dark:bg-primary-900">
&lt;span class="pr-3 pt-1 text-primary-600 dark:text-primary-300">
&lt;svg height="24" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">&lt;path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.5" d="m11.25 11.25l.041-.02a.75.75 0 0 1 1.063.852l-.708 2.836a.75.75 0 0 0 1.063.853l.041-.021M21 12a9 9 0 1 1-18 0a9 9 0 0 1 18 0m-9-3.75h.008v.008H12z"/>&lt;/svg>
&lt;/span>
&lt;span class="dark:text-neutral-300">The Kalman gain tells how much I should refine my prediction (&lt;em>i.e.&lt;/em>, the &lt;em>a priori&lt;/em> estimate) by given a measurement.&lt;/span>
&lt;/div>
&lt;p>We illustrate the intuition behind the Kalman Gain with a one-dimensional Kalman filter.&lt;/p>
&lt;p>The one-dimensional Kalman Gain is&lt;/p>
$$
K_{n}=\frac{p_{n, n-1}}{p_{n, n-1}+r_{n}} \in [0, 1]
$$
&lt;ul>
&lt;li>$p_{n, n-1}$
: variance of the state prediction $\hat{x}_{n, n-1}$
&lt;/li>
&lt;li>$r_n$
: variance of the measurement $z_n$
&lt;/li>
&lt;/ul>
&lt;p>(For the derivation, see &lt;a href="https://www.kalmanfilter.net/KalmanGainDeriv.html">here&lt;/a>)&lt;/p>
&lt;p>Let&amp;rsquo;s rewrite the (one-dimensional) &lt;a href="#state-update-equation">state update equation&lt;/a>:&lt;/p>
$$
\hat{x}_{n, n}=\hat{x}_{n, n-1}+K_{n}\left(z_{n}-\hat{x}_{n, n-1}\right)=\left(1-K_{n}\right) \hat{x}_{n, n-1}+K_{n} z_{n}
$$
&lt;p>The Kalman Gain $K_n$ is the &lt;strong>weight&lt;/strong> that we give to the measurement, while $(1 - K_n)$ is the weight that we give to the state prediction.&lt;/p>
&lt;ul>
&lt;li>
&lt;p>High Kalman Gain&lt;/p>
&lt;p>A low measurement uncertainty (small $r_n$) relative to the prediction uncertainty would result in a high Kalman Gain (close to 1). The new estimate would be close to the measurement.&lt;/p>
&lt;blockquote>
&lt;p>💡 Intuition&lt;/p>
&lt;p>small $r_n \rightarrow$ accurate measurements $\rightarrow$ place more weight on the measurements and thus conform to them&lt;/p>
&lt;/blockquote>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/kalman_filter-high_Kalman_Gain.drawio.png" alt="kalman_filter-high_Kalman_Gain.drawio" style="zoom:60%;" />
&lt;/li>
&lt;li>
&lt;p>Low Kalman Gain&lt;/p>
&lt;p>A high measurement uncertainty (large $r_n$) relative to the prediction uncertainty would result in a low Kalman Gain (close to 0). The new estimate would be close to the prediction.&lt;/p>
&lt;blockquote>
&lt;p>💡 Intuition&lt;/p>
&lt;p>large $r_n \rightarrow$ measurements are not accurate $\rightarrow$ place more weight on the prediction and trust it more&lt;/p>
&lt;/blockquote>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/kalman_filter-low_Kalman_Gain.drawio.png" alt="kalman_filter-low_Kalman_Gain.drawio" style="zoom: 60%;" />
&lt;/li>
&lt;/ul>
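&lt;p>The two cases can be reproduced directly with the one-dimensional gain formula (variances assumed for illustration):&lt;/p>

```python
# One-dimensional Kalman Gain K_n = p / (p + r); variances are illustrative.
def gain_1d(p_pred, r):
    return p_pred / (p_pred + r)

p_pred = 4.0                         # variance of the state prediction
k_accurate = gain_1d(p_pred, 0.1)    # small r -> K near 1: follow the measurement
k_noisy = gain_1d(p_pred, 100.0)     # large r -> K near 0: keep the prediction
```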
&lt;h3 id="state-update-equation">State update equation&lt;/h3>
&lt;p>The &lt;strong>state update equation&lt;/strong> updates/refines/corrects the &lt;a href="#state-extrapolation-equation">state prediction&lt;/a> with measurements.&lt;/p>
$$
\hat{\boldsymbol{x}}_{\boldsymbol{n}, \boldsymbol{n}}=\hat{\boldsymbol{x}}_{\boldsymbol{n}, \boldsymbol{n}-1}+\mathbf{K}_{\boldsymbol{n}}\underbrace{\left(\boldsymbol{z}_{n}-\mathbf{H} \hat{\boldsymbol{x}}_{\boldsymbol{n}, \boldsymbol{n}-1}\right)}_{\text{innovation}}
$$
&lt;ul>
&lt;li>
&lt;p>$\hat{\boldsymbol{x}}_{n, n}$
: estimated system state vector at time step $n$&lt;/p>
&lt;/li>
&lt;li>
&lt;p>$\hat{\boldsymbol{x}}_{n, n-1}$
: &lt;a href="#state-extrapolation-equation">predicted system state&lt;/a> vector at time step $n$&lt;/p>
&lt;/li>
&lt;li>
&lt;p>$\mathbf{K}_{\boldsymbol{n}}$
: &lt;a href="#kalman-gain-equation">Kalman Gain&lt;/a>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>$\mathbf{H}$
: observation matrix&lt;/p>
&lt;/li>
&lt;li>
&lt;p>$\boldsymbol{z}_{n}$
: measurement at time step $n$&lt;/p>
$$
\boldsymbol{z}_{n} = \mathbf{H} \boldsymbol{x}_n + \boldsymbol{v}_n
$$
&lt;ul>
&lt;li>
&lt;p>$\boldsymbol{x}_n$
: true system state (hidden state)&lt;/p>
&lt;/li>
&lt;li>
&lt;p>$\boldsymbol{v}_n$
: measurement noise&lt;/p>
&lt;p>$\rightarrow$ Measurement uncertainty $\mathbf{R}_n$ is given by&lt;/p>
$$
\mathbf{R}_n = E\{\boldsymbol{v}_n \boldsymbol{v}_n^T\}
$$
&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
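&lt;p>A small sketch of the state update equation (all numbers assumed): the innovation $\boldsymbol{z}_{n}-\mathbf{H} \hat{\boldsymbol{x}}_{n, n-1}$ is scaled by $\mathbf{K}_n$ and added to the prediction.&lt;/p>

```python
import numpy as np

# State update x_est = x_pred + K (z - H x_pred); numbers are illustrative.
x_pred = np.array([[11.125],        # predicted position (m)
                   [2.5]])          # predicted velocity (m/s)
H = np.array([[1.0, 0.0]])          # position-only measurement
K = np.array([[0.714],              # assumed Kalman Gain
              [0.143]])
z = np.array([[12.0]])              # position measurement (m)

innovation = z - H @ x_pred         # [[0.875]]
x_est = x_pred + K @ innovation
# Even though only position is measured, the velocity estimate is also
# corrected, via the second row of K.
```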
&lt;h3 id="covariance-update-equation">Covariance update equation&lt;/h3>
&lt;p>The &lt;strong>covariance update equation&lt;/strong> updates the uncertainty of the state estimate based on the &lt;a href="#covariance-extrapolation-equation">covariance prediction&lt;/a>.&lt;/p>
$$
\mathbf{P}_{n, n}=\left(\mathbf{I}-\mathbf{K}_{n} \mathbf{H}\right) \mathbf{P}_{n, n-1}\left(\mathbf{I}-\mathbf{K}_{n} \mathbf{H}\right)^{T}+\mathbf{K}_{n} \mathbf{R}_{n} \mathbf{K}_{n}^{T}
$$
&lt;ul>
&lt;li>$\mathbf{P}_{n, n}$
: estimate uncertainty (covariance) matrix of the current state&lt;/li>
&lt;li>$\mathbf{P}_{n, n-1}$
: &lt;a href="#covariance-extrapolation-equation">uncertainty (covariance) matrix of the current state prediction&lt;/a>&lt;/li>
&lt;li>$\mathbf{K}_{n}$
: &lt;a href="#kalman-gain-equation">Kalman Gain&lt;/a>&lt;/li>
&lt;li>$\mathbf{H}$
: observation matrix&lt;/li>
&lt;li>$\mathbf{R}_{n}$
: measurement Uncertainty (measurement noise covariance matrix)&lt;/li>
&lt;/ul>
&lt;details class="spoiler " id="spoiler-47">
&lt;summary class="cursor-pointer">Derivation&lt;/summary>
&lt;div class="rounded-lg bg-neutral-50 dark:bg-neutral-800 p-2">
&lt;p>According to &lt;a href="#state-update-equation">state update equation&lt;/a>:&lt;/p>
$$
\begin{aligned}
\hat{\boldsymbol{x}}\_{n, n} &amp;= \hat{\boldsymbol{x}}\_{n, n-1}+\mathbf{K}\_{n}\left(\boldsymbol{z}\_{n}-\mathbf{H} \hat{\boldsymbol{x}}\_{n, n-1}\right) \\\\\\\\
&amp;= \hat{\boldsymbol{x}}\_{n, n-1}+\mathbf{K}\_{n}\left(\mathbf{H} \boldsymbol{x}\_n + \boldsymbol{v}\_n-\mathbf{H} \hat{\boldsymbol{x}}\_{n, n-1}\right)
\end{aligned}
$$
&lt;p>The estimation error between the true (hidden) state $\boldsymbol{x}\_n$
and estimate $\hat{\boldsymbol{x}}\_{n, n}$
is:&lt;/p>
$$
\begin{aligned}
\boldsymbol{e}\_n &amp;= \boldsymbol{x}\_n - \hat{\boldsymbol{x}}\_{n, n} \\\\
&amp;= \boldsymbol{x}\_n - \hat{\boldsymbol{x}}\_{n, n-1} - \mathbf{K}\_{n}\mathbf{H}\boldsymbol{x}\_n - \mathbf{K}\_{n}\boldsymbol{v}\_n + \mathbf{K}\_{n}\mathbf{H} \hat{\boldsymbol{x}}\_{n, n-1}\\\\
&amp;= \boldsymbol{x}\_n - \hat{\boldsymbol{x}}\_{n, n-1} - \mathbf{K}\_{n}\mathbf{H}(\boldsymbol{x}\_n - \hat{\boldsymbol{x}}\_{n, n-1}) - \mathbf{K}\_{n}\boldsymbol{v}\_n \\\\
&amp;= (\mathbf{I} - \mathbf{K}\_{n}\mathbf{H})(\boldsymbol{x}\_n - \hat{\boldsymbol{x}}\_{n, n-1}) - \mathbf{K}\_{n}\boldsymbol{v}\_n
\end{aligned}
$$
&lt;p>Estimate Uncertainty&lt;/p>
$$
\begin{array}{l}
\boldsymbol{\mathbf{P}}\_{n, n}=E\left(\boldsymbol{e}\_{n} \boldsymbol{e}\_{n}^{T}\right)=E\left(\left(\boldsymbol{x}\_{n}-\hat{\boldsymbol{x}}\_{n, n}\right)\left(\boldsymbol{x}\_{n}-\hat{\boldsymbol{x}}\_{n, n}\right)^{T}\right) \qquad | \text{ Plug in } \boldsymbol{e}\_n\\\\\\\\
\boldsymbol{\mathbf{P}}\_{n, n}=E\left(\left(\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)-\mathbf{K}\_{n} v\_{n}\right) \right.\\\\
\left.\times\left(\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)-\mathbf{K}\_{n} v\_{n}\right)^{T}\right)\\\\\\\\
\mathbf{P}\_{n, n}=E\left(\left(\left(\mathbf{I}-\boldsymbol{\mathbf{K}}\_{n} \mathbf{H}\right)\left(\boldsymbol{x}\_{n}-\hat{\boldsymbol{x}}\_{n, n-1}\right)-\boldsymbol{\mathbf{K}}\_{n} v\_{n}\right) \right.\\\\
\left.\times\left(\left(\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)\right)^{T}-\left(\mathbf{K}\_{n} v\_{n}\right)^{T}\right)\right) \qquad | \text{ }(\mathbf{A} \mathbf{B})^{T}=\mathbf{B}^{T} \mathbf{A}^{T} \\\\\\\\
\mathbf{P}\_{n, n}=E\left(\left(\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)-\mathbf{K}\_{n} v\_{n}\right) \right. \\\\
\left.\times\left(\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)^{T}\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)^{T}-\left(\mathbf{K}\_{n} v\_{n}\right)^{T}\right)\right)\\\\\\\\
\mathbf{P}\_{n, n}=E\left(\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)\left(\boldsymbol{x}\_{n}-\hat{\boldsymbol{x}}\_{n, n-1}\right)^{T}\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)^{T}\right.\\\\
-\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)\left(\mathbf{K}\_{n} v\_{n}\right)^{T}\\\\
-\mathbf{K}\_{n} v\_{n}\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)^{T}\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)^{T}\\\\
\left.+\mathbf{K}\_{n} v\_{n}\left(\mathbf{K}\_{n} v\_{n}\right)^{T}\right) \qquad | \text{ } E(X \pm Y)=E(X) \pm E(Y)\\\\\\\\
\mathbf{P}\_{n, n}=E\left(\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)^{T}\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)^{T}\right)\\\\
-\color{red}{E\left(\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)\left(\mathbf{K}\_{n} v\_{n}\right)^{T}\right)}\\\\
-\color{red}{E\left(\mathbf{K}\_{n} v\_{n}\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)^{T}\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)^{T}\right)}\\\\
+E\left(\mathbf{K}\_{n} v\_{n}\left(\mathbf{K}\_{n} v\_{n}\right)^{T}\right)
\end{array}
$$
&lt;p>$\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}$
is the error of the prior estimate relative to the true value. It is uncorrelated with the current measurement noise $\boldsymbol{v}\_n$, and the expectation of the product of two uncorrelated zero-mean variables is zero.&lt;/p>
$$
\begin{aligned}
\color{red}{E\left(\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)\left(\mathbf{K}\_{n} v\_{n}\right)^{T}\right)} = 0 \\\\
\color{red}{E\left(\mathbf{K}\_{n} v\_{n}\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)^{T}\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)^{T}\right)} = 0
\end{aligned}
$$
&lt;p>Therefore&lt;/p>
$$
\begin{array}{l}
\mathbf{P}\_{n, n}=E\left(\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)^{T}\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)^{T}\right)\\\\
+{\color{DodgerBlue}{E\left(\mathbf{K}\_{n} v\_{n}\left(\mathbf{K}\_{n} v\_{n}\right)^{T}\right)}} \qquad | \text{ }(\mathbf{A} \mathbf{B})^{T}=\mathbf{B}^{T} \mathbf{A}^{T} \\\\\\\\
\mathbf{P}\_{n, n}=E\left(\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)^{T}\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)^{T}\right)\\\\
+{\color{DodgerBlue}{E\left(\mathbf{K}\_{n} v\_{n} v\_{n}^T \mathbf{K}\_{n}^T\right)}} \qquad | \text{ } E(a X)=a E(X) \\\\\\\\
\mathbf{P}\_{n, n} = \left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right) {\color{ForestGreen}\underbrace{{E\left(\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)\left(\boldsymbol{x}\_{n}-\hat{x}\_{n, n-1}\right)^{T}\right)}}\_{=\mathbf{P}\_{n, n-1}}}\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)^{T}
+\mathbf{K}\_{n}{\color{DodgerBlue}{\underbrace{E\left( v\_{n} v\_{n}^T \right)}\_{=\mathbf{R}\_n}}} \mathbf{K}\_{n}^T \\\\\\\\
\mathbf{P}\_{n, n} = \left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right) {\color{ForestGreen}\mathbf{P}\_{n, n-1}} \left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)^{T} +\mathbf{K}\_{n}{\color{DodgerBlue}\mathbf{R}\_n} \mathbf{K}\_{n}^T
\end{array}
$$
&lt;/div>
&lt;/details>
&lt;p>In many textbooks you will see a simplified form:&lt;/p>
$$
\mathbf{P}_{n, n}=\left(\mathbf{I}-\mathbf{K}_{n} \mathbf{H}\right) \mathbf{P}_{n, n-1}
$$
&lt;div class="flex px-4 py-3 mb-6 rounded-md bg-yellow-100 dark:bg-yellow-900">
&lt;span class="pr-3 pt-1 text-red-400">
&lt;svg height="24" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">&lt;path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.5" d="M12 9v3.75m-9.303 3.376c-.866 1.5.217 3.374 1.948 3.374h14.71c1.73 0 2.813-1.874 1.948-3.374L13.949 3.378c-.866-1.5-3.032-1.5-3.898 0zM12 15.75h.007v.008H12z"/>&lt;/svg>
&lt;/span>
&lt;span class="dark:text-neutral-300">&lt;p>This equation is elegant and easier to remember and in many cases it performs well.&lt;/p>
&lt;p>However, even the smallest error in computing the Kalman Gain (due to round off) can lead to huge computation errors. The subtraction $\left(\mathbf{I}-\mathbf{K}_{n} \mathbf{H}\right)$ can lead to nonsymmetric matrices due to floating-point errors. Therefore this equation is &lt;strong>numerically unstable&lt;/strong>!&lt;/p>
&lt;/span>
&lt;/div>
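&lt;p>A small NumPy sketch of why this matters (the covariance numbers are illustrative): the full update equation (often called the Joseph form) keeps $\mathbf{P}$ symmetric for &lt;em>any&lt;/em> gain, while the simplified form only does so for the exactly optimal gain, so a slightly perturbed gain breaks its symmetry.&lt;/p>

```python
import numpy as np

def update_joseph(P, K, H, R):
    # Full (Joseph) form: symmetric for ANY gain K, hence numerically stable
    A = np.eye(P.shape[0]) - K @ H
    return A @ P @ A.T + K @ R @ K.T

def update_short(P, K, H):
    # Simplified form: only valid for the exactly optimal gain
    return (np.eye(P.shape[0]) - K @ H) @ P

P = np.array([[0.36, 0.5], [0.5, 1.1]])  # an illustrative prior covariance
H = np.array([[1.0, 0.0]])
R = np.array([[0.05]])
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

K_perturbed = K + 1e-3  # simulate round-off error in the gain
P_j = update_joseph(P, K_perturbed, H, R)
P_s = update_short(P, K_perturbed, H)

asym_j = np.abs(P_j - P_j.T).max()  # ~1e-16: still symmetric
asym_s = np.abs(P_s - P_s.T).max()  # ~1e-4: symmetry is lost
print(asym_j, asym_s)
```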
&lt;details class="spoiler " id="spoiler-50">
&lt;summary class="cursor-pointer">Derivation of a simplified form of the Covariance Update Equation&lt;/summary>
&lt;div class="rounded-lg bg-neutral-50 dark:bg-neutral-800 p-2">
$$
\begin{array}{l}
\mathbf{P}\_{n, n} = \left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right) \mathbf{P}\_{n, n-1} \left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right)^{T} +\mathbf{K}\_{n}\mathbf{R}\_n \mathbf{K}\_{n}^T \\\\\\\\
\mathbf{P}\_{n, n}=\mathbf{P}\_{n, n-1}-\mathbf{P}\_{n, n-1} \boldsymbol{\mathbf{H}}^{T} \mathbf{K}\_{n}^{T}-\boldsymbol{\mathbf{K}}\_{n} \mathbf{H} \mathbf{P}\_{n, n-1} \\\\
+{\color{MediumOrchid}\mathbf{K}\_{n}}\left(\mathbf{H} \mathbf{P}\_{n, n-1} \mathbf{H}^{T}+\mathbf{R}\_{n}\right) \mathbf{K}\_{n}^{T} \qquad | \text{ Substitute Kalman Gain}\\\\\\\\
\mathbf{P}\_{n, n}=\mathbf{P}\_{n, n-1}-\mathbf{P}\_{n, n-1} \mathbf{H}^{T} \mathbf{K}\_{n}^{T}-\mathbf{K}\_{n} \mathbf{H} \mathbf{P}\_{n, n-1} \\\\
+{\color{MediumOrchid}\mathbf{P}\_{n, n-1}} {\color{MediumOrchid}{\mathbf{H}^{T}}} \underbrace{{\color{MediumOrchid}{\left(\mathbf{H} \mathbf{P}\_{n, n-1} \mathbf{H}^{T}+\mathbf{R}\_{n}\right)^{-1}}}\left(\mathbf{H} \mathbf{P}\_{n, n-1} \mathbf{H}^{T}+\mathbf{R}\_{n}\right)}\_{=\mathbf{I}} \mathbf{K}\_{n}^{T} \\\\\\\\
\mathbf{P}\_{n, n}=\mathbf{P}\_{n, n-1}-\mathbf{P}\_{n, n-1} \mathbf{H}^{T} \mathbf{K}\_{n}^{T}-\mathbf{K}\_{n} \mathbf{H} \mathbf{P}\_{n, n-1} \\\\
+\mathbf{P}\_{n, n-1} \mathbf{H}^{T} \mathbf{K}\_{n}^{T} \\\\\\\\
\mathbf{P}\_{n, n}=\mathbf{P}\_{n, n-1}-\mathbf{K}\_{n} \mathbf{H} \mathbf{P}\_{n, n-1} \\\\\\\\
\mathbf{P}\_{n, n}=\left(\mathbf{I}-\mathbf{K}\_{n} \mathbf{H}\right) \mathbf{P}\_{n, n-1}
\end{array}
$$
&lt;/div>
&lt;/details>
&lt;h2 id="reference">Reference&lt;/h2>
&lt;ul>
&lt;li>
&lt;p>👍 &lt;a href="https://www.kalmanfilter.net/default.aspx">kalmnnfilter.net&lt;/a>: clear and detaied tutorial for Kalman filter&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://www.kalmanfilter.net/alphabeta.html">The $\alpha-\beta-\gamma$ filter&lt;/a>: detailed introduction to Kalman filter&lt;/li>
&lt;li>&lt;a href="https://www.kalmanfilter.net/kalman1d.html">One-dimensional Kalman filter&lt;/a> with serveral elaborated numerical examples&lt;/li>
&lt;li>&lt;a href="https://www.kalmanfilter.net/kalmanmulti.html">Multidimensional Kalman filter&lt;/a>&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>
&lt;p>👍 &lt;a href="https://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/#mjx-eqn-kalpredictfull">How a Kalman filter works, in pictures&lt;/a>: Kalman filter explained intuitively in pictures&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;a href="https://youtube.com/playlist?list=PLn8PRpmsu08pzi6EMiYnR-076Mh-q3tWr">Understanding Kalman Filters&lt;/a>: a series of video tutorials that intuitively explains Kalman Filter step by step&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;a href="https://nbviewer.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb">Kalman and Bayesian Filters in Python&lt;/a>: Kalman filter (in Python) explained using Jupyter Notebook&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;a href="https://synapticlab.co.kr/attachment/cfile1.uf@2737C54B590907BA0D46CE.pdf">Understanding the Basis of the Kalman Filter Via a Simple and Intuitive Derivation&lt;/a>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;a href="https://en.wikipedia.org/wiki/Kalman_filter#Predict">Wikipedia: Kalman Filter&lt;/a>&lt;/p>
&lt;/li>
&lt;/ul></description></item><item><title>Linear Kalman Filter</title><link>https://haobin-tan.netlify.app/docs/notes/stochastische_informationsverarbeitung/understanding/linear_kf/</link><pubDate>Tue, 19 Jul 2022 00:00:00 +0000</pubDate><guid>https://haobin-tan.netlify.app/docs/notes/stochastische_informationsverarbeitung/understanding/linear_kf/</guid><description>&lt;h2 id="intuition-example">Intuition Example&lt;/h2>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-19%2015.40.40.png" alt="截屏2022-07-19 15.40.40">&lt;/p>
&lt;p>Estimation of the 1D position of the vehicle.&lt;/p>
&lt;p>Starting from an initial probabilistic estimate at time $k-1$&lt;/p>
&lt;blockquote>
&lt;p>Note: The initial estimate, the predicted state, and the final corrected state are all random variables whose means and covariances we will specify.&lt;/p>
&lt;/blockquote>
&lt;ol>
&lt;li>Use a motion model to &lt;strong>predict&lt;/strong> our new state&lt;/li>
&lt;li>Use an observation model (e.g. derived from GPS) to &lt;strong>correct&lt;/strong> that prediction of the vehicle position at time $k$&lt;/li>
&lt;/ol>
&lt;p>In this way, we can think of the Kalman filter as a technique to fuse information from different sensors to produce a final estimate of some unknown state, taking the uncertainty in motion and measurements into account.&lt;/p>
&lt;h2 id="the-linear-dynamical-system">The Linear Dynamical System&lt;/h2>
&lt;p>Motion model:&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2022-07-19%2016.07.45.png" alt="截屏2022-07-19 16.07.45" style="zoom:50%;" />
$$
\mathbf{x}_{k}=\mathbf{F}_{k-1} \mathbf{x}_{k-1}+\mathbf{G}_{k-1} \mathbf{u}_{k-1}+\mathbf{w}_{k-1}
$$
&lt;p>where&lt;/p>
&lt;ul>
&lt;li>$\mathbf{u}_k$: control input&lt;/li>
&lt;li>$\mathbf{w}_k$: process/motion noise. $\mathbf{w}_{k} \sim \mathcal{N}\left(\mathbf{0}, \mathbf{Q}_{k}\right)$
&lt;/li>
&lt;/ul>
&lt;p>Measurement model&lt;/p>
$$
\mathbf{y}_{k}=\mathbf{H}_{k} \mathbf{x}_{k}+\mathbf{v}_{k}
$$
&lt;p>where&lt;/p>
&lt;ul>
&lt;li>$\mathbf{v}_{k}$: measurement noise. $\mathbf{v}_{k} \sim \mathcal{N}\left(\mathbf{0}, \mathbf{R}_{k}\right)$
&lt;/li>
&lt;/ul>
&lt;h2 id="kalman-filter-steps">Kalman Filter Steps&lt;/h2>
&lt;h3 id="prediction">Prediction&lt;/h3>
&lt;p>We use the process model to predict how our state evolves since the last time step, and propagate our uncertainty accordingly.&lt;/p>
$$
\begin{array}{l}
\check{\mathbf{x}}_{k}=\mathbf{F}_{k-1} \mathbf{x}_{k-1}+\mathbf{G}_{k-1} \mathbf{u}_{k-1} \\
\check{\mathbf{P}}_{k}=\mathbf{F}_{k-1} \hat{\mathbf{P}}_{k-1} \mathbf{F}_{k-1}^{T}+\mathbf{Q}_{k-1}
\end{array}
$$
&lt;h3 id="correction">Correction&lt;/h3>
&lt;p>We use measurement to correct that prediction&lt;/p>
&lt;blockquote>
&lt;p>Notation:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>$\check{x}_k$: a prediction before the measurement is incorporated&lt;/p>
&lt;/li>
&lt;li>
&lt;p>$\hat{x}_k$: corrected prediction at time step $k$&lt;/p>
&lt;/li>
&lt;/ul>
&lt;/blockquote>
&lt;ul>
&lt;li>
&lt;p>Optimal Gain&lt;/p>
$$
\mathbf{K}_{k}=\check{\mathbf{P}}_{k} \mathbf{H}_{k}^{T}\left(\mathbf{H}_{k} \check{\mathbf{P}}_{k} \mathbf{H}_{k}^{T}+\mathbf{R}_{k}\right)^{-1}
$$
&lt;/li>
&lt;li>
&lt;p>Correction&lt;/p>
$$
\begin{aligned}
\hat{\mathbf{x}}_{k} &amp;=\check{\mathbf{x}}_{k}+\mathbf{K}_{k}\underbrace{\left(\mathbf{y}_{k}-\mathbf{H}_{k} \check{\mathbf{x}}_{k}\right)}_{\text{innovation}} \\
\hat{\mathbf{P}}_{k}&amp;=\left(\mathbf{I}-\mathbf{K}_{k} \mathbf{H}_{k}\right) \check{\mathbf{P}}_{k}
\end{aligned}
$$
&lt;/li>
&lt;/ul>
&lt;h3 id="summary">Summary&lt;/h3>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-19%2016.46.13.png" alt="截屏2022-07-19 16.46.13">&lt;/p>
&lt;h2 id="example">Example&lt;/h2>
&lt;p>Consider a self-driving vehicle estimating its own position.&lt;/p>
&lt;p>The state vector includes the position and its first derivative, velocity.&lt;/p>
$$
\mathbf{x}=\left[\begin{array}{c}
p \\
\frac{d p}{d t}=\dot{p}
\end{array}\right]
$$
&lt;p>Input is the scalar acceleration&lt;/p>
$$
\mathbf{u}=a=\frac{d^{2} p}{d t^{2}}
$$
&lt;p>The linear dynamical system is&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Motion/Process model&lt;/p>
$$
\mathbf{x}_{k}=\left[\begin{array}{cc}
1 &amp; \Delta t \\
0 &amp; 1
\end{array}\right] \mathbf{x}_{k-1}+\left[\begin{array}{c}
0 \\
\Delta t
\end{array}\right] \mathbf{u}_{k-1}+\mathbf{w}_{k-1}
$$
&lt;/li>
&lt;li>
&lt;p>Position observation&lt;/p>
$$
y_{k}=\left[\begin{array}{ll}
1 &amp; 0
\end{array}\right] \mathbf{x}_{k}+v_{k}
$$
&lt;/li>
&lt;li>
&lt;p>Noise densities&lt;/p>
$$
v_{k} \sim \mathcal{N}(0,0.05) \quad \mathbf{w}_{k} \sim \mathcal{N}\left(\mathbf{0},(0.1) \mathbf{1}_{2 \times 2}\right)
$$
&lt;/li>
&lt;/ul>
&lt;p>Given the data at time step $k=0$&lt;/p>
$$
\begin{array}{l}
\hat{\mathbf{x}}_{0} \sim \mathcal{N}\left(\left[\begin{array}{l}
0 \\
5
\end{array}\right],\left[\begin{array}{cc}
0.01 &amp; 0 \\
0 &amp; 1
\end{array}\right]\right) \\
\Delta t=0.5 \mathrm{~s} \\
u_{0}=-2\left[\mathrm{~m} / \mathrm{s}^{2}\right] \quad y_{1}=2.2[\mathrm{~m}]
\end{array}
$$
&lt;p>We want to estimate the state at time step $k=1$.&lt;/p>
&lt;p>Prediction step&lt;/p>
$$
\begin{aligned}
\check{\mathbf{x}}_{k} &amp;=\mathbf{F}_{k-1} \mathbf{x}_{k-1}+\mathbf{G}_{k-1} \mathbf{u}_{k-1} \\\\
{\left[\begin{array}{c}
\check{p}_{1} \\
\check{\dot{p}}_{1}
\end{array}\right] } &amp;=\left[\begin{array}{cc}
1 &amp; 0.5 \\
0 &amp; 1
\end{array}\right]\left[\begin{array}{l}
0 \\
5
\end{array}\right]+\left[\begin{array}{c}
0 \\
0.5
\end{array}\right](-2)=\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]
\end{aligned}
$$
$$
\begin{aligned}
\check{\mathbf{P}}_{k} &amp;=\mathbf{F}_{k-1} \hat{\mathbf{P}}_{k-1} \mathbf{F}_{k-1}^{T}+\mathbf{Q}_{k-1} \\\\
\check{\mathbf{P}}_{1} &amp;=\left[\begin{array}{cc}
1 &amp; 0.5 \\
0 &amp; 1
\end{array}\right]\left[\begin{array}{cc}
0.01 &amp; 0 \\
0 &amp; 1
\end{array}\right]\left[\begin{array}{cc}
1 &amp; 0.5 \\
0 &amp; 1
\end{array}\right]^{T}+\left[\begin{array}{cc}
0.1 &amp; 0 \\
0 &amp; 0.1
\end{array}\right]=\left[\begin{array}{cc}
0.36 &amp; 0.5 \\
0.5 &amp; 1.1
\end{array}\right]
\end{aligned}
$$
&lt;p>Correction step&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Kalman Gain&lt;/p>
$$
\begin{aligned}
\mathbf{K}_{1} &amp;=\check{\mathbf{P}}_{1} \mathbf{H}_{1}^{T}\left(\mathbf{H}_{1} \check{\mathbf{P}}_{1} \mathbf{H}_{1}^{T}+\mathbf{R}_{1}\right)^{-1} \\
&amp;=\left[\begin{array}{cc}
0.36 &amp; 0.5 \\
0.5 &amp; 1.1
\end{array}\right]\left[\begin{array}{l}
1 \\
0
\end{array}\right]\left(\left[\begin{array}{ll}
1 &amp; 0
\end{array}\right]\left[\begin{array}{cc}
0.36 &amp; 0.5 \\
0.5 &amp; 1.1
\end{array}\right]\left[\begin{array}{l}
1 \\
0
\end{array}\right]+0.05\right)^{-1} \\
&amp;=\left[\begin{array}{l}
0.88 \\
1.22
\end{array}\right]
\end{aligned}
$$
&lt;/li>
&lt;li>
&lt;p>Correction of the state prediction&lt;/p>
$$
\begin{aligned}
\hat{\mathbf{x}}_{1} &amp;=\check{\mathbf{x}}_{1}+\mathbf{K}_{1}\left(\mathbf{y}_{1}-\mathbf{H}_{1} \check{\mathbf{x}}_{1}\right) \\\\
{\left[\begin{array}{c}
\hat{p}_{1} \\
\hat{\dot{p}}_{1}
\end{array}\right] } &amp;=\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]+\left[\begin{array}{c}
0.88 \\
1.22
\end{array}\right]\left(2.2-\left[\begin{array}{ll}
1 &amp; 0
\end{array}\right]\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]\right)=\left[\begin{array}{l}
2.24 \\
3.63
\end{array}\right]
\end{aligned}
$$
&lt;/li>
&lt;li>
&lt;p>Correction of covariance&lt;/p>
$$
\begin{aligned}
\hat{\mathbf{P}}_{1} &amp;=\left(\mathbf{1}-\mathbf{K}_{1} \mathbf{H}_{1}\right) \check{\mathbf{P}}_{1} \\\\
&amp;=\left[\begin{array}{ll}
0.04 &amp; 0.06 \\
0.06 &amp; 0.49
\end{array}\right]
\end{aligned}
$$
&lt;blockquote>
&lt;p>Note that the final covariance (&lt;em>i.e.&lt;/em> the covariance after correction) is smaller. That is, we are more certain about the car position after we incorporate the position measurement. This uncertainty reduction occurs because our measurement model is fairly accurate (the measurement noise variance is quite small).&lt;/p>
&lt;/blockquote>
&lt;/li>
&lt;/ul>
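&lt;p>The worked example above can be reproduced with a few lines of NumPy (variable names mirror the symbols in the equations; printed values are rounded):&lt;/p>

```python
import numpy as np

F = np.array([[1.0, 0.5], [0.0, 1.0]])   # motion model, dt = 0.5
G = np.array([[0.0], [0.5]])
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 0.1 * np.eye(2)                      # process noise covariance
R = np.array([[0.05]])                   # measurement noise variance

x = np.array([[0.0], [5.0]])             # initial mean
P = np.array([[0.01, 0.0], [0.0, 1.0]])  # initial covariance
u = -2.0                                 # acceleration input
y = 2.2                                  # position measurement

# Prediction
x_check = F @ x + G * u
P_check = F @ P @ F.T + Q

# Correction
K = P_check @ H.T @ np.linalg.inv(H @ P_check @ H.T + R)
x_hat = x_check + K @ (y - H @ x_check)
P_hat = (np.eye(2) - K @ H) @ P_check

print(x_check.ravel())              # [2.5 4. ]
print(np.round(x_hat.ravel(), 2))   # [2.24 3.63]
print(np.round(P_hat, 2))           # [[0.04 0.06] [0.06 0.49]]
```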
&lt;h2 id="best-linear-unbiased-estimator-blue">Best Linear Unbiased Estimator (BLUE)&lt;/h2>
&lt;p>If we have white, uncorrelated, zero-mean noise, the Kalman Filter is the best (&lt;em>i.e.&lt;/em>, lowest-variance) unbiased estimator that uses only a linear combination of measurements.&lt;/p>
&lt;h3 id="bias">Bias&lt;/h3>
&lt;p>We repeat the above Kalman filter $K$ times.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2022-07-19%2022.29.32.png" alt="截屏2022-07-19 22.29.32" style="zoom:67%;" />
&lt;p>The &lt;mark>&lt;strong>bias&lt;/strong>&lt;/mark> is defined as the difference between true position and the mean of estimated position values.&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-19%2022.32.03.png" alt="截屏2022-07-19 22.32.03">&lt;/p>
&lt;p>An estimator or filter is &lt;mark>&lt;strong>unbiased&lt;/strong>&lt;/mark> if it produces an &amp;ldquo;average&amp;rdquo; error of zero at a particular time step $k$, over many trials.&lt;/p>
$$
E\left[\hat{e}_{k}\right]=E\left[\hat{p}_{k}-p_{k}\right]=E\left[\hat{p}_{k}\right]-p_{k}=0 \qquad \forall k \in \mathbb{N}
$$
&lt;h4 id="bias-in-kalman-filter-state-estimation">Bias in Kalman filter state estimation&lt;/h4>
&lt;p>Consider the error dynamics&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Predicted state error&lt;/p>
$$
\check{\mathbf{e}}_{k}=\check{\mathbf{x}}_{k}-\mathbf{x}_{k}
$$
&lt;/li>
&lt;li>
&lt;p>Corrected state error&lt;/p>
$$
\hat{\mathbf{e}}_{k}=\hat{\mathbf{x}}_{k}-\mathbf{x}_{k}
$$
&lt;/li>
&lt;/ul>
&lt;p>Using the Kalman Filter equations, we can derive&lt;/p>
$$
\begin{array}{l}
\check{\mathbf{e}}_{k}=\mathbf{F}_{k-1} \hat{\mathbf{e}}_{k-1}-\mathbf{w}_{k-1} \\
\hat{\mathbf{e}}_{k}=\left(\mathbf{1}-\mathbf{K}_{k} \mathbf{H}_{k}\right) \check{\mathbf{e}}_{k}+\mathbf{K}_{k} \mathbf{v}_{k}
\end{array}
$$
&lt;p>So long as&lt;/p>
&lt;ul>
&lt;li>The initial state estimate is unbiased ($E\left[\hat{\mathbf{e}}_{0}\right]=\mathbf{0}$
)&lt;/li>
&lt;li>The noise is white, uncorrelated and zero mean ($E[\mathbf{v}]=\mathbf{0}, E[\mathbf{w}]=\mathbf{0}$
)&lt;/li>
&lt;/ul>
&lt;p>Then the state estimate is unbiased:&lt;/p>
$$
\begin{aligned}
E\left[\check{\mathbf{e}}_{k}\right] &amp;=E\left[\mathbf{F}_{k-1} \hat{\mathbf{e}}_{k-1}-\mathbf{w}_{k-1}\right] \\
&amp;=\mathbf{F}_{k-1} E\left[\hat{\mathbf{e}}_{k-1}\right]-E\left[\mathbf{w}_{k-1}\right] \\
&amp;=\mathbf{0}
\end{aligned}
$$
$$
\begin{aligned}
E\left[\hat{\mathbf{e}}_{k}\right] &amp;=E\left[\left(\mathbf{1}-\mathbf{K}_{k} \mathbf{H}_{k}\right) \check{\mathbf{e}}_{k}+\mathbf{K}_{k} \mathbf{v}_{k}\right] \\
&amp;=\left(\mathbf{1}-\mathbf{K}_{k} \mathbf{H}_{k}\right) E\left[\check{\mathbf{e}}_{k}\right]+\mathbf{K}_{k} E\left[\mathbf{v}_{k}\right] \\
&amp;=\mathbf{0}
\end{aligned}
$$
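&lt;p>This unbiasedness can also be checked empirically. Below is a Monte Carlo sketch (the system matrices simply reuse the numbers from the example above; they are not special): over many trials, the average corrected-state error stays close to zero.&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(0)
F = np.array([[1.0, 0.5], [0.0, 1.0]])
G = np.array([[0.0], [0.5]])
H = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)
R = np.array([[0.05]])
u = -2.0
x0_mean = np.array([[0.0], [5.0]])
P0 = np.diag([0.01, 1.0])

errors = []
for _ in range(20000):
    # sample a true initial state consistent with the (unbiased) estimate
    x_true = x0_mean + np.linalg.cholesky(P0) @ rng.standard_normal((2, 1))
    # one step of the true dynamics with process and measurement noise
    x_true = F @ x_true + G * u + np.linalg.cholesky(Q) @ rng.standard_normal((2, 1))
    y = H @ x_true + np.sqrt(R) * rng.standard_normal((1, 1))
    # one Kalman filter predict/correct cycle
    x_check = F @ x0_mean + G * u
    P_check = F @ P0 @ F.T + Q
    K = P_check @ H.T @ np.linalg.inv(H @ P_check @ H.T + R)
    x_hat = x_check + K @ (y - H @ x_check)
    errors.append((x_hat - x_true).ravel())

mean_err = np.mean(errors, axis=0)
print(mean_err)  # both components are close to zero
```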
&lt;h3 id="consistency">Consistency&lt;/h3>
&lt;p>A filter is &lt;mark>&lt;strong>consistent&lt;/strong>&lt;/mark> if for all $k$&lt;/p>
$$
E\left[\hat{e}_{k}^{2}\right]=E\left[\left(\hat{p}_{k}-p_{k}\right)^{2}\right]=\hat{P}_{k}
$$
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2022-07-19%2023.22.43.png" alt="截屏2022-07-19 23.22.43" style="zoom:50%;" />
&lt;p>This means that the filter is neither overconfident nor underconfident in the estimate it has produced.&lt;/p>
&lt;p>The Kalman Filter is consistent in its state estimate.&lt;/p>
$$
E\left[\check{\mathbf{e}}_{k} \check{\mathbf{e}}_{k}^{T}\right]=\check{\mathbf{P}}_{k} \qquad E\left[\hat{\mathbf{e}}_{k} \hat{\mathbf{e}}_{k}^{T}\right]=\hat{\mathbf{P}}_{k}
$$
&lt;p>so long as&lt;/p>
&lt;ul>
&lt;li>the initial state estimate is consistent ($E\left[\hat{\mathbf{e}}_{0} \hat{\mathbf{e}}_{0}^{T}\right]=\hat{\mathbf{P}}_{0}$
)&lt;/li>
&lt;li>the noise is white and zero-mean ($E[\mathbf{v}]=\mathbf{0}, E[\mathbf{w}]=\mathbf{0}$
)&lt;/li>
&lt;/ul>
&lt;h2 id="reference">Reference&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://www.coursera.org/lecture/state-estimation-localization-self-driving-cars/lesson-1-the-linear-kalman-filter-7DFmY">The (Linear) Kalman Filter&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>Extended Kalman Filter</title><link>https://haobin-tan.netlify.app/docs/notes/stochastische_informationsverarbeitung/understanding/ekf/</link><pubDate>Wed, 20 Jul 2022 00:00:00 +0000</pubDate><guid>https://haobin-tan.netlify.app/docs/notes/stochastische_informationsverarbeitung/understanding/ekf/</guid><description>&lt;h2 id="motivation">Motivation&lt;/h2>
&lt;p>Linear systems do not exist in reality. We have to deal with nonlinear discrete-time systems&lt;/p>
$$
\begin{aligned}
\underbrace{\mathbf{x}_{k}}_{\text{current state}}&amp;=\mathbf{f}_{k-1}(\underbrace{\mathbf{x}_{k-1}}_{\text{previous state}}, \underbrace{\mathbf{u}_{k-1}}_{\text{inputs}}, \underbrace{\mathbf{w}_{k-1}}_{\text{process noise}}) \\\\
\underbrace{\mathbf{y}_{k}}_{\text{measurement}}&amp;=\mathbf{h}_{k}(\mathbf{x}_{k}, \underbrace{\mathbf{v}_{k}}_{\text{measurement noise}})
\end{aligned}
$$
&lt;p>How can we adapt Kalman Filter to nonlinear discrete-time systems?&lt;/p>
&lt;h2 id="-idea-linearizing-a-nonlinear-system">💡 Idea: Linearizing a Nonlinear System&lt;/h2>
&lt;p>Choose an operating point $a$ and approximate the nonlinear function by a tangent line at that point.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2022-07-20%2017.02.10.png" alt="截屏2022-07-20 17.02.10" style="zoom: 33%;" />
&lt;p>We compute this linear approximation using a &lt;strong>first-order Taylor expansion&lt;/strong>&lt;/p>
$$
f(x) \approx f(a)+\left.\frac{\partial f(x)}{\partial x}\right|_{x=a}(x-a)
$$
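&lt;p>A tiny sketch of this idea (the function, operating point, and test points are arbitrary): the quality of the first-order approximation degrades as we move away from $a$.&lt;/p>

```python
import numpy as np

f = np.arctan                           # an example nonlinear function
df = lambda x: 1.0 / (1.0 + x ** 2)     # its derivative

a = 0.5                                 # operating point
lin = lambda x: f(a) + df(a) * (x - a)  # first-order Taylor expansion at a

# The linearization error grows with the distance from a
errs = [abs(f(x) - lin(x)) for x in (0.6, 1.0, 2.0)]
print(errs)
```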
&lt;h2 id="extended-kalman-filter">Extended Kalman Filter&lt;/h2>
&lt;p>For EKF, we choose the operating point to be our most recent state estimate, our known input, and zero noise.&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Linearized motion model&lt;/p>
$$
\mathbf{x}_{k}=\mathbf{f}_{k-1}\left(\mathbf{x}_{k-1}, \mathbf{u}_{k-1}, \mathbf{w}_{k-1}\right) \approx \mathbf{f}_{k-1}\left(\hat{\mathbf{x}}_{k-1}, \mathbf{u}_{k-1}, \mathbf{0}\right) + \underbrace{\left.\frac{\partial \mathbf{f}_{k-1}}{\partial \mathbf{x}_{k-1}}\right|_{\hat{\mathbf{x}}_{k-1}, \mathbf{u}_{k-1}, \mathbf{0}}}_{\mathbf{F}_{k-1}}\left(\mathbf{x}_{k-1}-\hat{\mathbf{x}}_{k-1}\right)+\underbrace{\left.\frac{\partial \mathbf{f}_{k-1}}{\partial \mathbf{w}_{k-1}}\right|_{\hat{\mathbf{x}}_{k-1}, \mathbf{u}_{k-1}, \mathbf{0}}}_{\mathbf{L}_{k-1}} \mathbf{w}_{k-1}
$$
&lt;/li>
&lt;li>
&lt;p>Linearized measurement model&lt;/p>
$$
\mathbf{y}_{k}=\mathbf{h}_{k}\left(\mathbf{x}_{k}, \mathbf{v}_{k}\right) \approx \mathbf{h}_{k}\left(\check{\mathbf{x}}_{k}, \mathbf{0}\right)+\underbrace{\left.\frac{\partial \mathbf{h}_{k}}{\partial \mathbf{x}_{k}}\right|_{\check{\mathbf{x}}_{k}, \mathbf{0}}}_{\mathbf{H}_{k}}\left(\mathbf{x}_{k}-\check{\mathbf{x}}_{k}\right)+\underbrace{\left.\frac{\partial \mathbf{h}_{k}}{\partial \mathbf{v}_{k}}\right|_{\check{\mathbf{x}}_{k}, \mathbf{0}}}_{\mathbf{M}_{k}} \mathbf{v}_{k}
$$
&lt;/li>
&lt;/ul>
&lt;p>$\mathbf{F}_{k-1}, \mathbf{L}_{k-1}, \mathbf{H}_{k}, \mathbf{M}_{k}$
are Jacobian matrices.&lt;/p>
&lt;blockquote>
&lt;p>Intuitively, the Jacobian matrix tells you how fast each output of the function is changing along each input dimension.&lt;/p>
&lt;/blockquote>
&lt;p>With our linearized models and Jacobians, we can now use the Kalman Filter equations.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2022-07-20%2017.21.13.png" alt="截屏2022-07-20 17.21.13" style="zoom:50%;" />
&lt;div class="flex px-4 py-3 mb-6 rounded-md bg-primary-100 dark:bg-primary-900">
&lt;span class="pr-3 pt-1 text-primary-600 dark:text-primary-300">
&lt;svg height="24" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">&lt;path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.5" d="m11.25 11.25l.041-.02a.75.75 0 0 1 1.063.852l-.708 2.836a.75.75 0 0 0 1.063.853l.041-.021M21 12a9 9 0 1 1-18 0a9 9 0 0 1 18 0m-9-3.75h.008v.008H12z"/>&lt;/svg>
&lt;/span>
&lt;span class="dark:text-neutral-300">We still use the nonlinear model to propagate the mean of the state estimate in prediction step and compute the measurement residual innovation in correction step.&lt;/span>
&lt;/div>
&lt;h2 id="example">Example&lt;/h2>
&lt;p>Similar to the self-driving car localisation example in the Linear Kalman Filter, but this time we use an onboard sensor, a camera, to measure the altitude of distant landmarks relative to the horizon.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2022-07-20%2017.30.16.png" alt="截屏2022-07-20 17.30.16" style="zoom: 33%;" />
&lt;p>($S$ and $D$ are known in advance)&lt;/p>
&lt;p>State:&lt;/p>
$$
\mathbf{x}=\left[\begin{array}{l}
p \\
\dot{p}
\end{array}\right]
$$
&lt;p>Input:&lt;/p>
$$
\mathbf{u}=\ddot{p}
$$
&lt;p>Motion/Process model&lt;/p>
$$
\begin{aligned}
\mathbf{x}_{k} &amp;=\mathbf{f}\left(\mathbf{x}_{k-1}, \mathbf{u}_{k-1}, \mathbf{w}_{k-1}\right) \\\\
&amp;=\left[\begin{array}{cc}
1 &amp; \Delta t \\
0 &amp; 1
\end{array}\right] \mathbf{x}_{k-1}+\left[\begin{array}{c}
0 \\
\Delta t
\end{array}\right] \mathbf{u}_{k-1}+\mathbf{w}_{k-1}
\end{aligned}
$$
&lt;p>Landmark measurement model (nonlinear!)&lt;/p>
$$
\begin{aligned}
y_{k} &amp;=\phi_{k}=h\left(p_{k}, v_{k}\right) \\\\
&amp;=\tan ^{-1}\left(\frac{S}{D-p_{k}}\right)+v_{k}
\end{aligned}
$$
&lt;p>Noise densities&lt;/p>
$$
v_{k} \sim \mathcal{N}(0,0.01) \quad \mathbf{w}_{k} \sim \mathcal{N}\left(\mathbf{0},(0.1) \mathbf{1}_{2 \times 2}\right)
$$
&lt;p>The Jacobian matrices in this example are:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Motion model Jacobians&lt;/p>
$$
\begin{array}{l}
\mathbf{F}_{k-1}=\left.\frac{\partial \mathbf{f}}{\partial \mathbf{x}_{k-1}}\right|_{\hat{\mathbf{x}}_{k-1}, \mathbf{u}_{k-1}, \mathbf{0}}=\left[\begin{array}{cc}
1 &amp; \Delta t \\
0 &amp; 1
\end{array}\right] \\\\
\mathbf{L}_{k-1}=\left.\frac{\partial \mathbf{f}}{\partial \mathbf{w}_{k-1}}\right|_{\hat{\mathbf{x}}_{k-1}, \mathbf{u}_{k-1}, \mathbf{0}}=\mathbf{1}_{2 \times 2}
\end{array}
$$
&lt;/li>
&lt;li>
&lt;p>Measurement model Jacobians&lt;/p>
$$
\begin{array}{l}
\mathbf{H}_{k}=\left.\frac{\partial h}{\partial \mathbf{x}_{k}}\right|_{\check{\mathbf{x}}_{k}, \mathbf{0}}=\left[\begin{array}{ll}
\frac{S}{\left(D-\check{p}\_{k}\right)^{2}+S^{2}} &amp; 0
\end{array}\right] \\\\
M_{k}=\left.\frac{\partial h}{\partial v_{k}}\right|_{\check{\mathbf{x}}_{k}, \mathbf{0}}=1
\end{array}
$$
&lt;/li>
&lt;/ul>
&lt;p>Given&lt;/p>
$$
\begin{array}{l}
\hat{\mathbf{x}}_{0} \sim \mathcal{N}\left(\left[\begin{array}{l}
0 \\
5
\end{array}\right], \quad\left[\begin{array}{cc}
0.01 &amp; 0 \\
0 &amp; 1
\end{array}\right]\right)\\
\Delta t=0.5 \mathrm{~s}\\
u_{0}=-2\left[\mathrm{~m} / \mathrm{s}^{2}\right] \quad y_{1}=\pi / 6[\mathrm{rad}]\\
S=20[m] \quad D=40[m]
\end{array}
$$
&lt;p>What is the position estimate $\hat{p}_1$?&lt;/p>
&lt;p>Prediction:&lt;/p>
$$
\begin{array}{c}
\check{\mathbf{x}}_{1}=\mathbf{f}_{0}\left(\hat{\mathbf{x}}_{0}, \mathbf{u}_{0}, \mathbf{0}\right) \\
{\left[\begin{array}{c}
\check{p}_{1} \\
\check{\dot{p}}_{1}
\end{array}\right]=\left[\begin{array}{cc}
1 &amp; 0.5 \\
0 &amp; 1
\end{array}\right]\left[\begin{array}{l}
0 \\
5
\end{array}\right]+\left[\begin{array}{c}
0 \\
0.5
\end{array}\right](-2)=\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]} \\\\
\check{\mathbf{P}}_{1}=\mathbf{F}_{0} \hat{\mathbf{P}}_{0} \mathbf{F}_{0}^{T}+\mathbf{L}_{0} \mathbf{Q}_{0} \mathbf{L}_{0}^{T} \\
\check{\mathbf{P}}_{1}=\left[\begin{array}{cc}
1 &amp; 0.5 \\
0 &amp; 1
\end{array}\right]\left[\begin{array}{cc}
0.01 &amp; 0 \\
0 &amp; 1
\end{array}\right]\left[\begin{array}{cc}
1 &amp; 0 \\
0.5 &amp; 1
\end{array}\right]+\left[\begin{array}{cc}
1 &amp; 0 \\
0 &amp; 1
\end{array}\right]\left[\begin{array}{cc}
0.1 &amp; 0 \\
0 &amp; 0.1
\end{array}\right]\left[\begin{array}{cc}
1 &amp; 0 \\
0 &amp; 1
\end{array}\right]=\left[\begin{array}{cc}
0.36 &amp; 0.5 \\
0.5 &amp; 1.1
\end{array}\right]
\end{array}
$$
&lt;p>Correction:&lt;/p>
$$
\begin{aligned}
\mathbf{K}_{1} &amp;=\check{\mathbf{P}}_{1} \mathbf{H}_{1}^{T}\left(\mathbf{H}_{1} \check{\mathbf{P}}_{1} \mathbf{H}_{1}^{T}+\mathbf{M}_{1} \mathbf{R}_{1} \mathbf{M}_{1}^{T}\right)^{-1} \\
&amp;=\left[\begin{array}{cc}
0.36 &amp; 0.5 \\
0.5 &amp; 1.1
\end{array}\right]\left[\begin{array}{c}
0.011 \\
0
\end{array}\right]\left(\left[\begin{array}{ll}
0.011 &amp; 0
\end{array}\right]\left[\begin{array}{cc}
0.36 &amp; 0.5 \\
0.5 &amp; 1.1
\end{array}\right]\left[\begin{array}{c}
0.011 \\
0
\end{array}\right]+1(0.01)(1)\right)^{-1} \\
&amp;=\left[\begin{array}{c}
0.40 \\
0.55
\end{array}\right]
\end{aligned}
$$
$$
\begin{aligned}
\hat{\mathbf{x}}_{1} &amp;=\check{\mathbf{x}}_{1}+\mathbf{K}_{1}\left(\mathbf{y}_{1}-\mathbf{h}_{1}\left(\check{\mathbf{x}}_{1}, \mathbf{0}\right)\right) \\
{\left[\begin{array}{c}
\hat{p}_{1} \\
\hat{\dot{p}}_{1}
\end{array}\right]}&amp;={\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]+\left[\begin{array}{c}
0.40 \\
0.55
\end{array}\right](0.52-0.49)=\left[\begin{array}{c}
2.51 \\
4.02
\end{array}\right] }
\end{aligned}
$$
$$
\begin{aligned}
\hat{\mathbf{P}}_{1} &amp;=\left(\mathbf{1}-\mathbf{K}_{1} \mathbf{H}_{1}\right) \check{\mathbf{P}}_{1} \\
&amp;=\left[\begin{array}{cc}
0.36 &amp; 0.50 \\
0.50 &amp; 1.1
\end{array}\right]
\end{aligned}
$$
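&lt;p>The numbers in this example can be checked with a short NumPy script (variable names mirror the symbols above; printed values are rounded):&lt;/p>

```python
import numpy as np

S, D = 20.0, 40.0
F = np.array([[1.0, 0.5], [0.0, 1.0]])
G = np.array([[0.0], [0.5]])
Q = 0.1 * np.eye(2)
R = np.array([[0.01]])

x = np.array([[0.0], [5.0]])
P = np.diag([0.01, 1.0])
u = -2.0
y = np.pi / 6  # bearing measurement [rad]

# Prediction (the motion model happens to be linear, so f reduces to F x + G u)
x_check = F @ x + G * u
P_check = F @ P @ F.T + Q  # L = I, so L Q L^T = Q

# Jacobian of h(p) = arctan(S / (D - p)) evaluated at the predicted state
p_check = x_check[0, 0]
H = np.array([[S / ((D - p_check) ** 2 + S ** 2), 0.0]])

# Correction (M = 1)
K = P_check @ H.T @ np.linalg.inv(H @ P_check @ H.T + R)
innovation = y - np.arctan(S / (D - p_check))
x_hat = x_check + K * innovation
P_hat = (np.eye(2) - K @ H) @ P_check

print(np.round(K.ravel(), 2))      # [0.4  0.55]
print(np.round(x_hat.ravel(), 2))  # [2.51 4.02]
```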
&lt;h2 id="summary">Summary&lt;/h2>
&lt;ul>
&lt;li>
&lt;p>The EKF uses &lt;em>linearization&lt;/em> to adapt the Kalman Filter to nonlinear systems&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Linearization works by computing a local linear approximation to a nonlinear function using the first-order Taylor expansion at a chosen operating point (in this case, the last state estimate)&lt;/p>
&lt;/li>
&lt;/ul>
&lt;h2 id="reference">Reference&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://www.coursera.org/lecture/state-estimation-localization-self-driving-cars/lesson-3-going-nonlinear-the-extended-kalman-filter-qIyk3">Going Nonlinear - The Extended Kalman Filter&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>Error State Extended Kalman Filter (ES-EKF)</title><link>https://haobin-tan.netlify.app/docs/notes/stochastische_informationsverarbeitung/understanding/esekf/</link><pubDate>Wed, 20 Jul 2022 00:00:00 +0000</pubDate><guid>https://haobin-tan.netlify.app/docs/notes/stochastische_informationsverarbeitung/understanding/esekf/</guid><description>&lt;h2 id="whats-in-a-state">What&amp;rsquo;s in a State?&lt;/h2>
&lt;p>We can think of the vehicle state as composed of two parts&lt;/p>
$$
\underbrace{\mathbf{x}}_{\text{True state}}=\underbrace{\hat{\mathbf{x}}}_{\text{Nominal state ("Large")}}+\underbrace{\delta \mathbf{x}}_{\text{Error state ("small")}}
$$
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2022-07-20%2022.29.28.png" alt="截屏2022-07-20 22.29.28" style="zoom: 33%;" />
&lt;h2 id="-idea">💡 Idea&lt;/h2>
&lt;p>Instead of running the Kalman Filter on the full state (which might have lots of complicated nonlinear behaviours), we use the EKF to estimate the error state directly, and then use that estimate of the error state as a correction to the nominal state.&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-20%2022.33.53.png" alt="截屏2022-07-20 22.33.53">&lt;/p>
&lt;h2 id="es-ekf-steps">ES-EKF Steps&lt;/h2>
&lt;p>Loop&lt;/p>
&lt;ol>
&lt;li>
&lt;p>Update nominal state with motion model (for a bunch of times until getting the measurement)&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2022-07-20%2022.36.51.png" alt="截屏2022-07-20 22.36.51" style="zoom: 33%;" />
&lt;/li>
&lt;li>
&lt;p>Propagate uncertainty (for a bunch of times until getting the measurement)&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2022-07-20%2022.37.38.png" alt="截屏2022-07-20 22.37.38" style="zoom: 33%;" />
&lt;/li>
&lt;li>
&lt;p>If a measurement is available&lt;/p>
&lt;ol>
&lt;li>
&lt;p>Compute Kalman Gain&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Compute error state&lt;/p>
$$
\delta \hat{\mathbf{x}}_{k}=\mathbf{K}_{k}\left(\mathbf{y}_{k}-\mathbf{h}_{k}\left(\check{\mathbf{x}}_{k}, \mathbf{0}\right)\right)
$$
&lt;/li>
&lt;li>
&lt;p>Correct nominal state&lt;/p>
$$
\hat{\mathbf{x}}_{k}=\check{\mathbf{x}}_{k}+\delta \hat{\mathbf{x}}_{k}
$$
&lt;/li>
&lt;li>
&lt;p>Correct state covariance&lt;/p>
$$
\hat{\mathbf{P}}_{k}=\left(\mathbf{1}-\mathbf{K}_{k} \mathbf{H}_{k}\right) \check{\mathbf{P}}_{k}
$$
&lt;/li>
&lt;/ol>
&lt;/li>
&lt;/ol>
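&lt;p>The loop above can be sketched as one generic step (a sketch, not a full implementation: the motion model $f$, the measurement model $h$, and the Jacobians $\mathbf{F}$, $\mathbf{L}$, $\mathbf{H}$ must be supplied by a concrete model):&lt;/p>

```python
import numpy as np

def es_ekf_step(x_nom, P, u, f, F, L, Q, measurement=None, h=None, H=None, R=None):
    """One ES-EKF cycle: propagate the nominal state and covariance,
    and fold in a measurement if one is available (a generic sketch)."""
    # 1. Update nominal state with the (possibly nonlinear) motion model
    x_nom = f(x_nom, u)
    # 2. Propagate uncertainty with the linearized model
    P = F @ P @ F.T + L @ Q @ L.T
    if measurement is not None:
        # 3.1 Compute Kalman gain
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        # 3.2 Compute error state from the innovation
        dx = K @ (measurement - h(x_nom))
        # 3.3 Correct nominal state with the error state
        x_nom = x_nom + dx
        # 3.4 Correct state covariance
        P = (np.eye(P.shape[0]) - K @ H) @ P
    return x_nom, P
```

&lt;p>With a linear model this step reduces exactly to the standard Kalman filter; for constrained quantities such as 3D rotations, the plain addition in step 3.3 would be replaced by the composition operator discussed below.&lt;/p>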
&lt;h2 id="why-use-the-es-ekf">Why Use the ES-EKF?&lt;/h2>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Better performance compared to the vanilla EKF&lt;/strong>&lt;/p>
&lt;p>The &amp;ldquo;small&amp;rdquo; error state is more amenable to linear filtering than the &amp;ldquo;large&amp;rdquo; nominal state, which can be integrated nonlinearly&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Easy to work with constrained quantities (e.g. rotations in 3D)&lt;/strong>&lt;/p>
&lt;p>We can also break down the state using a generalized composition operator&lt;/p>
$$
\underbrace{\mathbf{x}}_{\text{true state}}=\underbrace{\hat{\mathbf{x}}}_{\text{Nominal state (constrained)}} \bigoplus \underbrace{\delta\mathbf{x}}_{\text{Error state (unconstrained)}}
$$
&lt;/li>
&lt;/ul>
&lt;h2 id="summary">Summary&lt;/h2>
&lt;ul>
&lt;li>
&lt;p>The error-state formulation separates the state into a &amp;ldquo;large&amp;rdquo; nominal state and a &amp;ldquo;small&amp;rdquo; error state&lt;/p>
&lt;ul>
&lt;li>nominal state: keeps track of the motion model, predicts what the state should be&lt;/li>
&lt;li>error state: captures the modeling errors and the process noise that accumulate over time&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>
&lt;p>The ES-EKF uses local linearization to estimate the error state and uses it to correct the nominal state&lt;/p>
&lt;/li>
&lt;li>
&lt;p>The ES-EKF can perform better than the vanilla EKF, and provides a natural way to handle constrained quantities like 3D rotations&lt;/p>
&lt;/li>
&lt;/ul>
&lt;h2 id="reference">Reference&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://www.coursera.org/lecture/state-estimation-localization-self-driving-cars/lesson-4-an-improved-ekf-the-error-state-extended-kalman-filter-7Nwfw">An Improved EKF - The Error State Extended Kalman Filter&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>EKF Limitations</title><link>https://haobin-tan.netlify.app/docs/notes/stochastische_informationsverarbeitung/understanding/ekf_limitations/</link><pubDate>Thu, 21 Jul 2022 00:00:00 +0000</pubDate><guid>https://haobin-tan.netlify.app/docs/notes/stochastische_informationsverarbeitung/understanding/ekf_limitations/</guid><description>&lt;h2 id="linearization-error">Linearization Error&lt;/h2>
&lt;p>Recap: The EKF works by linearizing the nonlinear motion and measurement models to update the mean and covariance of the state.&lt;/p>
&lt;p>The difference between the linear approximation and the nonlinear function is called &lt;mark>&lt;strong>linearization error&lt;/strong>&lt;/mark>&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2022-07-21%2015.51.52.png" alt="截屏2022-07-21 15.51.52" style="zoom: 33%;" />
&lt;p>In general, linearization error depends on&lt;/p>
&lt;ul>
&lt;li>How nonlinear the function is&lt;/li>
&lt;li>How far away from the operating point the linear approximation is being used&lt;/li>
&lt;/ul>
&lt;h2 id="example-polar-coordinates-rightarrow-cartesian-coordinates">Example: Polar Coordinates $\rightarrow$ Cartesian Coordinates&lt;/h2>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-21%2015.55.56.png" alt="截屏2022-07-21 15.55.56">&lt;/p>
&lt;p>Now we perform linearized transformation:&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-21%2015.57.09.png" alt="截屏2022-07-21 15.57.09">&lt;/p>
&lt;p>Compare the linearized and nonlinearized output distributions:&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-21%2015.56.53.png" alt="截屏2022-07-21 15.56.53">&lt;/p>
&lt;ul>
&lt;li>The mean of the linearized distribution is in a very different place from the true mean&lt;/li>
&lt;li>The linearized covariance seriously underestimates the spread of the true output distribution along the $y$-dimension&lt;/li>
&lt;/ul>
&lt;p>$\rightarrow$ In this case, the linearization error can cause our belief about the output distribution to completely miss the mark, and this can cause big problems in our estimator!&lt;/p>
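&lt;p>This effect is easy to reproduce numerically. The sketch below is a minimal illustration (the input mean and standard deviations are assumed values, not taken from the figures): it pushes samples of a polar-coordinate Gaussian through the nonlinear transform and compares the Monte Carlo mean of the output with the linearized mean $h(\mu)$.&lt;/p>

```python
import numpy as np

def polar_to_cartesian(r, theta):
    """Nonlinear transformation h: (r, theta) -> (x, y)."""
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=-1)

# Assumed input distribution: tight range, very uncertain bearing
mu = np.array([1.0, np.pi / 2])   # mean of (r, theta)
std = np.array([0.02, 0.5])       # std of (r, theta), independent components

# The linearized transform maps the input mean straight through: h(mu) = (0, 1)
lin_mean = polar_to_cartesian(mu[0], mu[1])

# Monte Carlo estimate of the true output mean
rng = np.random.default_rng(0)
samples = rng.normal(mu, std, size=(200_000, 2))
mc_mean = polar_to_cartesian(samples[:, 0], samples[:, 1]).mean(axis=0)

print(lin_mean)  # approximately (0, 1)
print(mc_mean)   # the y-component is noticeably below 1: the linearized mean is biased
```

&lt;p>With a wide bearing uncertainty, the true mean of $y$ is pulled well below 1, while the linearized mean stays at $h(\mu) = (0, 1)$, exactly the mismatch shown in the plots.&lt;/p>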
&lt;h2 id="limitations-of-the-ekf">Limitations of the EKF&lt;/h2>
&lt;h3 id="linearization-errors">Linearization errors&lt;/h3>
&lt;p>The EKF is prone to linearization error when&lt;/p>
&lt;ul>
&lt;li>The system dynamics are highly nonlinear&lt;/li>
&lt;li>The sensor sampling is slow relative to how fast the system is evolving&lt;/li>
&lt;/ul>
&lt;p>This has two important consequences&lt;/p>
&lt;ul>
&lt;li>The estimated mean state can become very different from the true state&lt;/li>
&lt;li>The estimated state covariance can fail to capture the true uncertainty in the state&lt;/li>
&lt;/ul>
&lt;p>&lt;span style="color: Red">$\Rightarrow$ Linearization error can cause the estimator to be overconfident in a wrong answer!&lt;/span>&lt;/p>
&lt;h3 id="computing-jacobians">Computing Jacobians&lt;/h3>
&lt;p>Computing Jacobian matrices for complicated nonlinear functions is also a common source of error in EKF implementations!&lt;/p>
&lt;ul>
&lt;li>Analytical differentiation is prone to human error&lt;/li>
&lt;li>Numerical differentiation can be slow and unstable&lt;/li>
&lt;li>Automatic differentiation (e.g., at compile time) can also behave unpredictably&lt;/li>
&lt;/ul>
&lt;p>And what if one or more of our models is non-differentiable (&lt;em>e.g.&lt;/em> the step function)?&lt;/p>
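&lt;p>A cheap safeguard against hand-derivation mistakes is to compare the analytical Jacobian against finite differences at a few test points. A minimal sketch, using the polar-to-Cartesian model as an assumed example:&lt;/p>

```python
import numpy as np

def h(x):
    """Polar -> Cartesian model: (r, theta) -> (r cos(theta), r sin(theta))."""
    r, theta = x
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def jacobian_analytic(x):
    """Hand-derived Jacobian of h."""
    r, theta = x
    return np.array([[np.cos(theta), -r * np.sin(theta)],
                     [np.sin(theta),  r * np.cos(theta)]])

def jacobian_numeric(f, x, eps=1e-6):
    """Central finite differences, one column per input dimension."""
    n = len(x)
    J = np.zeros((len(f(x)), n))
    for j in range(n):
        d = np.zeros(n)
        d[j] = eps
        J[:, j] = (f(x + d) - f(x - d)) / (2 * eps)
    return J

x0 = np.array([2.0, 0.3])
assert np.allclose(jacobian_analytic(x0), jacobian_numeric(h, x0), atol=1e-6)
```

&lt;p>If the hand-derived Jacobian has a sign or index error, the assertion fails immediately, which catches exactly the kind of mistake described above.&lt;/p>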
&lt;h2 id="summary">Summary&lt;/h2>
&lt;ul>
&lt;li>The EKF uses analytical local linearization and, as a result, is sensitive to linearization errors&lt;/li>
&lt;li>For highly nonlinear systems, the EKF estimate can diverge and become unreliable&lt;/li>
&lt;li>Computing complex Jacobian matrices is an error-prone process and must be done with substantial care&lt;/li>
&lt;/ul>
&lt;h2 id="reference">Reference&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://www.coursera.org/lecture/state-estimation-localization-self-driving-cars/lesson-5-limitations-of-the-ekf-OCrZc">Limitations of the EKF&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>Unscented Kalman Filter</title><link>https://haobin-tan.netlify.app/docs/notes/stochastische_informationsverarbeitung/understanding/ukf/</link><pubDate>Thu, 21 Jul 2022 00:00:00 +0000</pubDate><guid>https://haobin-tan.netlify.app/docs/notes/stochastische_informationsverarbeitung/understanding/ukf/</guid><description>&lt;h2 id="intuition">Intuition&lt;/h2>
&lt;p>&amp;ldquo;It is easier to approximate a probability distribution than it is to approximate an arbitrary nonlinear function&amp;rdquo;&lt;/p>
&lt;h2 id="idea">Idea&lt;/h2>
&lt;p>When we perform a nonlinear transformation $h(x)$ on a 1D Gaussian distribution (left), the result is a more complicated 1D distribution (right). We already know the mean and standard deviation of the input Gaussian, and we want to figure out the mean and standard deviation of the output distribution using this information and the nonlinear function.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2022-07-21%2017.02.49.png" alt="截屏2022-07-21 17.02.49" style="zoom: 33%;" />
&lt;p>The UKF gives us a way to do this in three steps:&lt;/p>
&lt;ol>
&lt;li>
&lt;p>Choose sigma points from our input distribution&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2022-07-21%2017.08.42.png" alt="截屏2022-07-21 17.08.42" style="zoom: 25%;" />
&lt;/li>
&lt;li>
&lt;p>Transform sigma points&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2022-07-21%2017.10.05.png" alt="截屏2022-07-21 17.10.05" style="zoom:25%;" />
&lt;/li>
&lt;li>
&lt;p>Compute weighted mean and covariance of transformed sigma points, which will give us a good approximation of the true output distribution.&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2022-07-21%2017.11.07.png" alt="截屏2022-07-21 17.11.07" style="zoom:25%;" />
&lt;/li>
&lt;/ol>
&lt;h2 id="unscented-transform">Unscented Transform&lt;/h2>
&lt;h3 id="choosing-sigma-points">Choosing sigma points&lt;/h3>
&lt;p>For an $N$-dimensional PDF $\mathcal{N}\left(\mu_{x}, \Sigma_{x x}\right)$
, we need $2N+1$ sigma points&lt;/p>
&lt;ul>
&lt;li>One for the mean&lt;/li>
&lt;li>and the rest symmetrically distributed about the mean&lt;/li>
&lt;/ul>
&lt;p>1D:&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2022-07-21%2017.44.13.png" alt="截屏2022-07-21 17.44.13" style="zoom: 33%;" />
&lt;p>2D:&lt;/p>
&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/截屏2022-07-21%2017.44.42.png" alt="截屏2022-07-21 17.44.42" style="zoom: 33%;" />
&lt;p>Steps&lt;/p>
&lt;ol>
&lt;li>
&lt;p>Compute the Cholesky Decomposition of the covariance matrix&lt;/p>
$$
\mathbf{L} \mathbf{L}^{T}=\boldsymbol{\Sigma}_{x x}
$$
&lt;/li>
&lt;li>
&lt;p>Calculate the sigma points&lt;/p>
$$
\begin{aligned}
\mathbf{x}_{0} &amp;=\boldsymbol{\mu}_{x} &amp; &amp; \\
\mathbf{x}_{i} &amp;=\boldsymbol{\mu}_{x}+\sqrt{N+\kappa} \operatorname{col}_{i} \mathbf{L} &amp; &amp; i=1, \ldots, N \\
\mathbf{x}_{i+N} &amp;=\boldsymbol{\mu}_{x}-\sqrt{N+\kappa} \operatorname{col}_{i} \mathbf{L} &amp; &amp; i=1, \ldots, N
\end{aligned}
$$
&lt;ul>
&lt;li>$\kappa$: tuning parameter. For Gaussian PDFs, setting $\kappa = 3 - N$ is a good choice.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ol>
&lt;h3 id="transforming">Transforming&lt;/h3>
&lt;p>Pass each of the $2N + 1$ sigma points through the nonlinear function $\mathbf{h}(x)$&lt;/p>
$$
\mathbf{y}_{i}=\mathbf{h}\left(\mathbf{x}_{i}\right) \quad i=0, \ldots, 2 N
$$
&lt;h3 id="recombining">Recombining&lt;/h3>
&lt;p>Compute the mean and covariance of the output PDF&lt;/p>
&lt;ul>
&lt;li>
&lt;p>mean&lt;/p>
$$
\boldsymbol{\mu}_{y}=\sum_{i=0}^{2 N} \alpha_{i} \mathbf{y}_{i}
$$
&lt;/li>
&lt;li>
&lt;p>covariance&lt;/p>
$$
\boldsymbol{\Sigma}_{y y}=\sum_{i=0}^{2 N} \alpha_{i}\left(\mathbf{y}_{i}-\boldsymbol{\mu}_{y}\right)\left(\mathbf{y}_{i}-\boldsymbol{\mu}_{y}\right)^{T}
$$
&lt;/li>
&lt;/ul>
&lt;p>with weights&lt;/p>
$$
\alpha_{i}=\left\{\begin{array}{lr}
\frac{\kappa}{N+\kappa} &amp; i=0 \\
\frac{1}{2} \frac{1}{N+\kappa} &amp; \text { otherwise }
\end{array}\right.
$$
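&lt;p>The three stages (choosing, transforming, recombining) can be collected into a single function. The following is a minimal sketch, not part of the original notes; note that &lt;code>np.linalg.cholesky&lt;/code> returns the lower-triangular factor $\mathbf{L}$ with $\mathbf{L}\mathbf{L}^T = \boldsymbol{\Sigma}_{xx}$.&lt;/p>

```python
import numpy as np

def unscented_transform(mu, Sigma, h, kappa=None):
    """Propagate N(mu, Sigma) through y = h(x) using 2N+1 sigma points."""
    N = len(mu)
    if kappa is None:
        kappa = 3.0 - N                  # common choice for Gaussian PDFs
    L = np.linalg.cholesky(Sigma)        # L @ L.T == Sigma
    scale = np.sqrt(N + kappa)

    # 1. Choose sigma points: the mean, plus/minus scaled columns of L
    X = ([mu]
         + [mu + scale * L[:, i] for i in range(N)]
         + [mu - scale * L[:, i] for i in range(N)])

    # 2. Transform each sigma point through the nonlinear function
    Y = np.array([h(x) for x in X])

    # 3. Recombine with weights alpha_i (kappa/(N+kappa) for i=0, else 1/(2(N+kappa)))
    alpha = np.full(2 * N + 1, 0.5 / (N + kappa))
    alpha[0] = kappa / (N + kappa)
    mu_y = alpha @ Y
    dev = Y - mu_y
    Sigma_y = (alpha[:, None] * dev).T @ dev
    return mu_y, Sigma_y
```

&lt;p>A useful sanity check: for a linear function $\mathbf{h}(\mathbf{x}) = \mathbf{A}\mathbf{x}$ the transform is exact, returning $\mathbf{A}\boldsymbol{\mu}_{x}$ and $\mathbf{A}\boldsymbol{\Sigma}_{xx}\mathbf{A}^T$.&lt;/p>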
&lt;h3 id="example">Example&lt;/h3>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-22%2010.51.19.png" alt="截屏2022-07-22 10.51.19">&lt;/p>
&lt;p>Compared to linearization (red), we can see the unscented transform (orange) gives a much better approximation!&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-22%2010.52.26.png" alt="截屏2022-07-22 10.52.26">&lt;/p>
&lt;h2 id="the-unscented-kalman-filter-ukf">The Unscented Kalman Filter (UKF)&lt;/h2>
&lt;p>&lt;strong>💡Idea: Instead of approximating the system equations by linearizing, we will calculate sigma points and use the Unscented Transform to approximate the PDFs directly.&lt;/strong>&lt;/p>
&lt;h3 id="prediction-step">Prediction step&lt;/h3>
&lt;p>To propagate the state from time $k-1$ to time $k$, apply the Unscented Transform using the current best guess for the mean and covariance.&lt;/p>
&lt;ol>
&lt;li>
&lt;p>Compute sigma points&lt;/p>
$$
\begin{array}{rlrl}
\hat{\mathbf{L}}_{k-1} \hat{\mathbf{L}}_{k-1}^{T} &amp; =\hat{\mathbf{P}}_{k-1} &amp; (\text{Cholesky Decomposition})&amp; \\
\hat{\mathbf{x}}_{k-1}^{(0)} &amp; =\hat{\mathbf{x}}_{k-1} &amp; &amp; \\
\hat{\mathbf{x}}_{k-1}^{(i)} &amp; =\hat{\mathbf{x}}_{k-1}+\sqrt{N+\kappa} \operatorname{col}_{i} \hat{\mathbf{L}}_{k-1} &amp; i=1 \ldots N \\
\hat{\mathbf{x}}_{k-1}^{(i+N)} &amp; =\hat{\mathbf{x}}_{k-1}-\sqrt{N+\kappa} \operatorname{col}_{i} \hat{\mathbf{L}}_{k-1} &amp; i=1 \ldots N
\end{array}
$$
&lt;/li>
&lt;li>
&lt;p>Propagate sigma points&lt;/p>
$$
\breve{\mathbf{x}}_{k}^{(i)}=\mathbf{f}_{k-1}\left(\hat{\mathbf{x}}_{k-1}^{(i)}, \mathbf{u}_{k-1}, \mathbf{0}\right) \quad i=0 \ldots 2 N
$$
&lt;/li>
&lt;li>
&lt;p>Compute predicted mean and covariance (under the assumption of additive noise)&lt;/p>
$$
\begin{aligned}
\alpha^{(i)} &amp;=\left\{\begin{array}{ll}
\frac{\kappa}{N+\kappa} &amp; i=0 \\
\frac{1}{2} \frac{1}{N+\kappa} &amp; \text { otherwise }
\end{array}\right.\\\\
\check{\mathbf{x}}_{k} &amp;=\sum_{i=0}^{2 N} \alpha^{(i)} \check{\mathbf{x}}_{k}^{(i)} \\\\
\check{\mathbf{P}}_{k} &amp;=\sum_{i=0}^{2 N} \alpha^{(i)}\left(\check{\mathbf{x}}_{k}^{(i)}-\check{\mathbf{x}}_{k}\right)\left(\check{\mathbf{x}}_{k}^{(i)}-\check{\mathbf{x}}_{k}\right)^{T}+\mathbf{Q}_{k-1}
\end{aligned}
$$
&lt;/li>
&lt;/ol>
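&lt;p>The three prediction equations above can be sketched as follows (a minimal illustration under the additive-noise assumption; the helper names are not from the notes):&lt;/p>

```python
import numpy as np

def sigma_points(mu, P, kappa):
    """2N+1 sigma points for N(mu, P): the mean plus/minus scaled columns of L."""
    N = len(mu)
    L = np.linalg.cholesky(P)
    s = np.sqrt(N + kappa)
    return np.vstack([mu[None, :],
                      mu + s * L.T,   # row i is mu + s * col_i(L)
                      mu - s * L.T])

def ukf_predict(x_hat, P_hat, f, u, Q, kappa=1.0):
    """UKF prediction: propagate sigma points through f and recombine."""
    N = len(x_hat)
    X = sigma_points(x_hat, P_hat, kappa)                 # 1. compute sigma points
    X_prop = np.array([f(x, u) for x in X])               # 2. propagate through f
    alpha = np.full(2 * N + 1, 0.5 / (N + kappa))         # 3. weighted mean/covariance
    alpha[0] = kappa / (N + kappa)
    x_check = alpha @ X_prop
    dev = X_prop - x_check
    P_check = (alpha[:, None] * dev).T @ dev + Q          # additive process noise
    return x_check, P_check
```

&lt;p>For a linear motion model this reduces exactly to the standard Kalman prediction $\mathbf{F}\hat{\mathbf{P}}\mathbf{F}^T + \mathbf{Q}$, which makes it easy to test.&lt;/p>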
&lt;h3 id="correction-step">Correction step&lt;/h3>
&lt;p>To correct the state estimate using measurement at time $k$, use the nonlinear measurement model and the sigma points from the prediction step to predict the measurements.&lt;/p>
&lt;ol>
&lt;li>
&lt;p>Predict measurements from propagated sigma points&lt;/p>
$$
\hat{\mathbf{y}}_{k}^{(i)}=\mathbf{h}_{k}\left(\check{\mathbf{x}}_{k}^{(i)}, \mathbf{0}\right) \quad i=0 \ldots 2 N
$$
&lt;/li>
&lt;li>
&lt;p>Estimate mean and covariance of predicted measurements&lt;/p>
$$
\begin{array}{l}
\hat{\mathbf{y}}_{k}=\displaystyle \sum_{i=0}^{2 N} \alpha^{(i)} \hat{\mathbf{y}}_{k}^{(i)} \\
\mathbf{P}_{y}=\displaystyle\sum_{i=0}^{2 N} \alpha^{(i)}\left(\hat{\mathbf{y}}_{k}^{(i)}-\hat{\mathbf{y}}_{k}\right)\left(\hat{\mathbf{y}}_{k}^{(i)}-\hat{\mathbf{y}}_{k}\right)^{T}+\mathbf{R}_{k}
\end{array}
$$
&lt;/li>
&lt;li>
&lt;p>Compute cross-covariance and Kalman Gain&lt;/p>
$$
\begin{aligned}
\mathbf{P}_{x y} &amp;=\sum_{i=0}^{2 N} \alpha^{(i)}\left(\check{\mathbf{x}}_{k}^{(i)}-\check{\mathbf{x}}_{k}\right)\left(\hat{\mathbf{y}}_{k}^{(i)}-\hat{\mathbf{y}}_{k}\right)^{T} \\\\
\mathbf{K}_{k} &amp;=\mathbf{P}_{x y} \mathbf{P}_{y}^{-1}
\end{aligned}
$$
&lt;/li>
&lt;li>
&lt;p>Compute corrected mean and covariance&lt;/p>
$$
\begin{array}{l}
\hat{\mathbf{x}}_{k}=\check{\mathbf{x}}_{k}+\mathbf{K}_{k}\left(\mathbf{y}_{k}-\hat{\mathbf{y}}_{k}\right) \\
\hat{\mathbf{P}}_{k}=\check{\mathbf{P}}_{k}-\mathbf{K}_{k} \mathbf{P}_{y} \mathbf{K}_{k}^{T}
\end{array}
$$
&lt;/li>
&lt;/ol>
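&lt;p>A corresponding sketch of the correction step, assuming the prediction step has already produced $\check{\mathbf{x}}_k$, $\check{\mathbf{P}}_k$, and the propagated sigma points (again a minimal illustration, not a definitive implementation):&lt;/p>

```python
import numpy as np

def ukf_correct(x_check, P_check, X_prop, y, h, R, kappa=1.0):
    """UKF correction using the sigma points X_prop from the prediction step."""
    N = len(x_check)
    alpha = np.full(2 * N + 1, 0.5 / (N + kappa))
    alpha[0] = kappa / (N + kappa)

    # 1. Predict a measurement from every propagated sigma point
    Y = np.array([h(x) for x in X_prop])

    # 2. Mean and covariance of predicted measurements (additive noise R)
    y_hat = alpha @ Y
    dy = Y - y_hat
    P_y = (alpha[:, None] * dy).T @ dy + R

    # 3. Cross-covariance and Kalman gain
    dx = X_prop - x_check
    P_xy = (alpha[:, None] * dx).T @ dy
    K = P_xy @ np.linalg.inv(P_y)

    # 4. Corrected mean and covariance
    x_hat = x_check + K @ (y - y_hat)
    P_hat = P_check - K @ P_y @ K.T
    return x_hat, P_hat
```

&lt;p>As with the prediction step, a linear measurement model $\mathbf{h}(\mathbf{x}) = \mathbf{H}\mathbf{x}$ recovers the standard Kalman update, which is a convenient correctness check.&lt;/p>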
&lt;h2 id="ukf-example">UKF Example&lt;/h2>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-22%2011.09.51-20220722122019063.png" alt="截屏2022-07-22 11.09.51">&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-22%2011.10.52.png" alt="截屏2022-07-22 11.10.52">&lt;/p>
&lt;p>Prediction step&lt;/p>
&lt;p>2D state estimate $\Rightarrow N=2, \quad \kappa=3-N=1$&lt;/p>
&lt;ol>
&lt;li>
&lt;p>Compute sigma points&lt;/p>
$$
\begin{aligned}
&amp;\hat{\mathbf{L}}_{0} \hat{\mathbf{L}}_{0}^{T}=\hat{\mathbf{P}}_{0} \\
&amp;{\left[\begin{array}{cc}
0.1 &amp; 0 \\
0 &amp; 1
\end{array}\right]\left[\begin{array}{cc}
0.1 &amp; 0 \\
0 &amp; 1
\end{array}\right]^{T}=\left[\begin{array}{cc}
0.01 &amp; 0 \\
0 &amp; 1
\end{array}\right]}
\end{aligned}
$$
$$
\begin{aligned}
\hat{\mathbf{x}}_{0}^{(0)} &amp;=\hat{\mathbf{x}}_{0} \\
\hat{\mathbf{x}}_{0}^{(i)} &amp;=\hat{\mathbf{x}}_{0}+\sqrt{N+\kappa} \operatorname{col}_{i} \hat{\mathbf{L}}_{0} \quad i=1, \ldots, N \\
\hat{\mathbf{x}}_{0}^{(i+N)} &amp;=\hat{\mathbf{x}}_{0}-\sqrt{N+\kappa} \operatorname{col}_{i} \hat{\mathbf{L}}_{0} \quad i=1, \ldots, N \\
\hat{\mathbf{x}}_{0}^{(0)} &amp;=\left[\begin{array}{l}
0 \\
5
\end{array}\right] \\
\hat{\mathbf{x}}_{0}^{(1)} &amp;=\left[\begin{array}{l}
0 \\
5
\end{array}\right]+\sqrt{3}\left[\begin{array}{c}
0.1 \\
0
\end{array}\right]=\left[\begin{array}{c}
0.2 \\
5
\end{array}\right] \\
\hat{\mathbf{x}}_{0}^{(2)} &amp;=\left[\begin{array}{l}
0 \\
5
\end{array}\right]+\sqrt{3}\left[\begin{array}{c}
0 \\
1
\end{array}\right]=\left[\begin{array}{c}
0 \\
6.7
\end{array}\right] \\
\hat{\mathbf{x}}_{0}^{(3)} &amp;=\left[\begin{array}{l}
0 \\
5
\end{array}\right]-\sqrt{3}\left[\begin{array}{c}
0.1 \\
0
\end{array}\right]=\left[\begin{array}{c}
-0.2 \\
5
\end{array}\right] \\
\hat{\mathbf{x}}_{0}^{(4)} &amp;=\left[\begin{array}{l}
0 \\
5
\end{array}\right]-\sqrt{3}\left[\begin{array}{l}
0 \\
1
\end{array}\right]=\left[\begin{array}{c}
0 \\
3.3
\end{array}\right]
\end{aligned}
$$
&lt;/li>
&lt;li>
&lt;p>Propagate sigma points&lt;/p>
$$
\begin{aligned}
&amp;\check{\mathbf{x}}_{1}^{(i)}=\mathbf{f}_{0}\left(\hat{\mathbf{x}}_{0}^{(i)}, \mathbf{u}_{0}, \mathbf{0}\right) \quad i=0, \ldots, 2 N \\
&amp;\check{\mathbf{x}}_{1}^{(0)}=\left[\begin{array}{cc}
1 &amp; 0.5 \\
0 &amp; 1
\end{array}\right]\left[\begin{array}{c}
0 \\
5
\end{array}\right]+\left[\begin{array}{c}
0 \\
0.5
\end{array}\right](-2)=\left[\begin{array}{c}
2.5 \\
4
\end{array}\right] \\
&amp;\check{\mathbf{x}}_{1}^{(1)}=\left[\begin{array}{cc}
1 &amp; 0.5 \\
0 &amp; 1
\end{array}\right]\left[\begin{array}{c}
0.2 \\
5
\end{array}\right]+\left[\begin{array}{c}
0 \\
0.5
\end{array}\right](-2)=\left[\begin{array}{c}
2.7 \\
4
\end{array}\right] \\
&amp;\check{\mathbf{x}}_{1}^{(2)}=\left[\begin{array}{cc}
1 &amp; 0.5 \\
0 &amp; 1
\end{array}\right]\left[\begin{array}{c}
0 \\
6.7
\end{array}\right]+\left[\begin{array}{c}
0 \\
0.5
\end{array}\right](-2)=\left[\begin{array}{c}
3.4 \\
5.7
\end{array}\right] \\
&amp;\check{\mathbf{x}}_{1}^{(3)}=\left[\begin{array}{cc}
1 &amp; 0.5 \\
0 &amp; 1
\end{array}\right]\left[\begin{array}{c}
-0.2 \\
5
\end{array}\right]+\left[\begin{array}{c}
0 \\
0.5
\end{array}\right](-2)=\left[\begin{array}{c}
2.3 \\
4
\end{array}\right] \\
&amp;\check{\mathbf{x}}_{1}^{(4)}=\left[\begin{array}{cc}
1 &amp; 0.5 \\
0 &amp; 1
\end{array}\right]\left[\begin{array}{c}
0 \\
3.3
\end{array}\right]+\left[\begin{array}{c}
0 \\
0.5
\end{array}\right](-2)=\left[\begin{array}{c}
1.6 \\
2.3
\end{array}\right]
\end{aligned}
$$
&lt;/li>
&lt;li>
&lt;p>Compute predicted mean and covariance (under the assumption of additive noise)&lt;/p>
$$
\begin{aligned}
\alpha^{(i)} &amp;=\left\{\begin{array}{l}
\frac{\kappa}{N+\kappa}=\frac{1}{2+1}=\frac{1}{3} \quad i=0 \\
\frac{1}{2} \frac{1}{N+\kappa}=\frac{1}{2} \frac{1}{2+1}=\frac{1}{6} \quad \text { otherwise }
\end{array}\right.\\\\ \\
\check{\mathbf{x}}_{k}&amp;=\sum_{i=0}^{2 N} \alpha^{(i)} \check{\mathbf{x}}_{k}^{(i)} \\
&amp;=\frac{1}{3}\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]+\frac{1}{6}\left[\begin{array}{c}
2.7 \\
4
\end{array}\right]+\frac{1}{6}\left[\begin{array}{l}
3.4 \\
5.7
\end{array}\right]+\frac{1}{6}\left[\begin{array}{c}
2.3 \\
4
\end{array}\right]+\frac{1}{6}\left[\begin{array}{l}
1.6 \\
2.3
\end{array}\right]=\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]
\end{aligned}
$$
$$
\begin{aligned}
\check{\mathbf{P}}_{k}=&amp; \sum_{i=0}^{2 N} \alpha^{(i)}\left(\check{\mathbf{x}}_{k}^{(i)}-\check{\mathbf{x}}_{k}\right)\left(\check{\mathbf{x}}_{k}^{(i)}-\check{\mathbf{x}}_{k}\right)^{T}+\mathbf{Q}_{k-1} \\
=&amp; \frac{1}{3}\left(\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]-\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]\right)\left(\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]-\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]\right)^{T}+\\
&amp; \frac{1}{6}\left(\left[\begin{array}{c}
2.7 \\
4
\end{array}\right]-\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]\right)\left(\left[\begin{array}{c}
2.7 \\
4
\end{array}\right]-\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]\right)^{T}+\frac{1}{6}\left(\left[\begin{array}{c}
3.4 \\
5.7
\end{array}\right]-\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]\right)\left(\left[\begin{array}{c}
3.4 \\
5.7
\end{array}\right]-\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]\right)^{T}+\\
&amp; \frac{1}{6}\left(\left[\begin{array}{c}
2.3 \\
4
\end{array}\right]-\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]\right)\left(\left[\begin{array}{c}
2.3 \\
4
\end{array}\right]-\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]\right)^{T}+\frac{1}{6}\left(\left[\begin{array}{c}
1.6 \\
2.3
\end{array}\right]-\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]\right)\left(\left[\begin{array}{c}
1.6 \\
2.3
\end{array}\right]-\left[\begin{array}{c}
2.5 \\
4
\end{array}\right]\right)^{T}+\left[\begin{array}{cc}
0.1 &amp; 0 \\
0 &amp; 0.1
\end{array}\right] \\
=&amp; {\left[\begin{array}{cc}
0.36 &amp; 0.5 \\
0.5 &amp; 1.1
\end{array}\right] }
\end{aligned}
$$
&lt;/li>
&lt;/ol>
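&lt;p>The prediction-step arithmetic above can be verified numerically; the rounded values shown in the derivation agree with the exact computation:&lt;/p>

```python
import numpy as np

# Setup from the example: f(x, u) = F x + B u, N = 2, kappa = 3 - N = 1
F = np.array([[1.0, 0.5], [0.0, 1.0]])
B = np.array([0.0, 0.5])
u = -2.0
x0 = np.array([0.0, 5.0])
P0 = np.array([[0.01, 0.0], [0.0, 1.0]])
Q = 0.1 * np.eye(2)
N, kappa = 2, 1.0

# 1. Sigma points
L = np.linalg.cholesky(P0)                        # [[0.1, 0], [0, 1]]
s = np.sqrt(N + kappa)                            # sqrt(3)
X = np.vstack([x0, x0 + s * L.T, x0 - s * L.T])   # 5 sigma points

# 2. Propagate through the (linear) motion model
X_prop = X @ F.T + B * u

# 3. Weighted mean and covariance
alpha = np.array([kappa, 0.5, 0.5, 0.5, 0.5]) / (N + kappa)
x_pred = alpha @ X_prop
dev = X_prop - x_pred
P_pred = (alpha[:, None] * dev).T @ dev + Q

print(np.round(x_pred, 2))  # [2.5 4. ]
print(np.round(P_pred, 2))  # [[0.36 0.5 ] [0.5  1.1 ]]
```

&lt;p>The exact result matches the slides: $\check{\mathbf{x}}_1 = (2.5, 4)$ and $\check{\mathbf{P}}_1 = \begin{bmatrix}0.36 &amp; 0.5 \\ 0.5 &amp; 1.1\end{bmatrix}$; the intermediate sigma points differ only by the rounding used in the figures (e.g. $\sqrt{3} \approx 1.7$).&lt;/p>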
&lt;p>Correction step&lt;/p>
&lt;ol>
&lt;li>
&lt;p>Predict measurements from propagated sigma points&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-22%2011.47.22.png" alt="截屏2022-07-22 11.47.22">&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Estimate mean and covariance of predicted measurements&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-22%2011.47.33.png" alt="截屏2022-07-22 11.47.33">&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Compute cross-covariance and Kalman Gain&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-22%2011.49.43.png" alt="截屏2022-07-22 11.49.43">&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Compute corrected mean and covariance&lt;/p>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-22%2011.49.59.png" alt="截屏2022-07-22 11.49.59">&lt;/p>
&lt;/li>
&lt;/ol>
&lt;h2 id="summary">Summary&lt;/h2>
&lt;ul>
&lt;li>The UKF uses the unscented transform to adapt the Kalman filter to nonlinear systems.&lt;/li>
&lt;li>The unscented transform works by passing a small set of carefully chosen samples through a nonlinear system, and computing the mean and covariance of the outputs.&lt;/li>
&lt;li>The unscented transform does a better job of approximating the output distribution than analytical local linearization, for similar computational cost.&lt;/li>
&lt;/ul>
&lt;h2 id="comparision-of-nonlinear-kf">Comparision of Nonlinear KF&lt;/h2>
&lt;p>&lt;img src="https://raw.githubusercontent.com/EckoTan0804/upic-repo/master/uPic/%E6%88%AA%E5%B1%8F2022-07-22%2011.57.02.png" alt="截屏2022-07-22 11.57.02">&lt;/p>
&lt;h2 id="reference">Reference&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://www.coursera.org/lecture/state-estimation-localization-self-driving-cars/lesson-6-an-alternative-to-the-ekf-the-unscented-kalman-filter-voRRb">An Alternative to the EKF - The Unscented Kalman Filter&lt;/a>&lt;/li>
&lt;/ul></description></item></channel></rss>