<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Slides |</title><link>https://avivajpeyi.github.io/slides/</link><atom:link href="https://avivajpeyi.github.io/slides/index.xml" rel="self" type="application/rss+xml"/><description>Slides</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 01 Jan 2024 00:00:00 +0000</lastBuildDate><item><title>Example Talk: Recent Work</title><link>https://avivajpeyi.github.io/slides/example/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://avivajpeyi.github.io/slides/example/</guid><description>&lt;h1 id="example-talk"&gt;Example Talk&lt;/h1&gt;
&lt;h3 id="dr-alex-johnson--meta-ai"&gt;Dr. Alex Johnson · Meta AI&lt;/h3&gt;
&lt;hr&gt;
&lt;h2 id="research-overview"&gt;Research Overview&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Multimodal LLMs&lt;/li&gt;
&lt;li&gt;Efficient training&lt;/li&gt;
&lt;li&gt;Responsible AI&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id="code--math"&gt;Code &amp;amp; Math&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;$$
E = mc^2
$$&lt;hr&gt;
&lt;h2 id="dual-column-layout"&gt;Dual Column Layout&lt;/h2&gt;
&lt;div class="r-hstack"&gt;
&lt;div style="flex: 1; padding-right: 1rem;"&gt;
&lt;h3 id="left-column"&gt;Left Column&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Point A&lt;/li&gt;
&lt;li&gt;Point B&lt;/li&gt;
&lt;li&gt;Point C&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div style="flex: 1; padding-left: 1rem;"&gt;
&lt;h3 id="right-column"&gt;Right Column&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Detail 1&lt;/li&gt;
&lt;li&gt;Detail 2&lt;/li&gt;
&lt;li&gt;Detail 3&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;!-- Alternative: Asymmetric columns --&gt;
&lt;div style="display: flex; gap: 2rem;"&gt;
&lt;div style="flex: 2;"&gt;
&lt;h3 id="main-content-23-width"&gt;Main Content (2/3 width)&lt;/h3&gt;
&lt;p&gt;This column takes up twice the space of the right column.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;example&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;code works too&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;div style="flex: 1;"&gt;
&lt;h3 id="sidebar-13-width"&gt;Sidebar (1/3 width)&lt;/h3&gt;
&lt;blockquote class="border-l-4 border-neutral-300 dark:border-neutral-600 pl-4 italic text-neutral-600 dark:text-neutral-400 my-6"&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;br&gt;
Key points in smaller column&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2 id="image--text-layout"&gt;Image + Text Layout&lt;/h2&gt;
&lt;div class="r-hstack" style="align-items: center;"&gt;
&lt;div style="flex: 1;"&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="flex justify-center "&gt;
&lt;div class="w-full" &gt;&lt;img src="https://images.unsplash.com/photo-1708011271954-c0d2b3155ded?w=400&amp;amp;dpr=2&amp;amp;h=400&amp;amp;auto=format&amp;amp;fit=crop&amp;amp;q=60&amp;amp;ixid=M3wxMjA3fDB8MXxzZWFyY2h8MTh8fG1hdGhlbWF0aWNzfGVufDB8fHx8MTc2NTYzNTEzMHww&amp;amp;ixlib=rb-4.1.0" alt="" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;/div&gt;
&lt;div style="flex: 1; padding-left: 2rem;"&gt;
&lt;h3 id="results"&gt;Results&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;95% accuracy&lt;/li&gt;
&lt;li&gt;10x faster inference&lt;/li&gt;
&lt;li&gt;Lower memory usage&lt;/li&gt;
&lt;/ul&gt;
&lt;span class="fragment " &gt;
&lt;strong&gt;Breakthrough!&lt;/strong&gt;
&lt;/span&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2 id="speaker-notes"&gt;Speaker Notes&lt;/h2&gt;
&lt;p&gt;Press &lt;strong&gt;S&lt;/strong&gt; to open presenter view with notes!&lt;/p&gt;
&lt;p&gt;This slide has hidden speaker notes below.&lt;/p&gt;
&lt;p&gt;Note:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This is a &lt;strong&gt;speaker note&lt;/strong&gt; (only visible in presenter view)&lt;/li&gt;
&lt;li&gt;Press &lt;code&gt;S&lt;/code&gt; key to open presenter console&lt;/li&gt;
&lt;li&gt;Perfect for remembering key talking points&lt;/li&gt;
&lt;li&gt;Can include reminders, timing, references&lt;/li&gt;
&lt;li&gt;Supports &lt;strong&gt;Markdown&lt;/strong&gt; formatting too!&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id="progressive-reveals"&gt;Progressive Reveals&lt;/h2&gt;
&lt;p&gt;Content appears step-by-step:&lt;/p&gt;
&lt;span class="fragment " &gt;
First point appears
&lt;/span&gt;
&lt;span class="fragment " &gt;
Then the second point
&lt;/span&gt;
&lt;span class="fragment " &gt;
Finally the conclusion
&lt;/span&gt;
&lt;span class="fragment highlight-red" &gt;
This one can be &lt;strong&gt;highlighted&lt;/strong&gt;!
&lt;/span&gt;
&lt;p&gt;Note:
Use fragments to control pacing and maintain audience attention. Each fragment appears on click.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="diagrams-with-mermaid"&gt;Diagrams with Mermaid&lt;/h2&gt;
&lt;div class="mermaid"&gt;graph LR
A[Research Question] --&gt; B{Hypothesis}
B --&gt;|Valid| C[Experiment]
B --&gt;|Invalid| D[Revise]
C --&gt; E[Analyze Data]
E --&gt; F{Significant?}
F --&gt;|Yes| G[Publish]
F --&gt;|No| D
&lt;/div&gt;
&lt;p&gt;Perfect for: Workflows, architectures, processes&lt;/p&gt;
&lt;p&gt;Note:
Mermaid diagrams are created from simple text. They&amp;rsquo;re version-controllable and can be edited anywhere!&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="research-results"&gt;Research Results&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Accuracy&lt;/th&gt;
&lt;th&gt;Speed&lt;/th&gt;
&lt;th&gt;Memory&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Baseline&lt;/td&gt;
&lt;td&gt;87.3%&lt;/td&gt;
&lt;td&gt;1.0x&lt;/td&gt;
&lt;td&gt;2GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ours (v1)&lt;/td&gt;
&lt;td&gt;92.1%&lt;/td&gt;
&lt;td&gt;1.5x&lt;/td&gt;
&lt;td&gt;1.8GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Ours (v2)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;95.8%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;2.3x&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.2GB&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;blockquote class="border-l-4 border-neutral-300 dark:border-neutral-600 pl-4 italic text-neutral-600 dark:text-neutral-400 my-6"&gt;
&lt;p&gt;&lt;strong&gt;Key Finding:&lt;/strong&gt; 8.5% improvement over baseline with 40% memory reduction&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Note:
Tables are perfect for comparative results. Markdown tables are simple and version-control friendly.&lt;/p&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-color="#1e3a8a"
&gt;
&lt;h2 id="custom-backgrounds"&gt;Custom Backgrounds&lt;/h2&gt;
&lt;p&gt;This slide has a &lt;strong&gt;blue background&lt;/strong&gt;!&lt;/p&gt;
&lt;p&gt;You can customize:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Background colors&lt;/li&gt;
&lt;li&gt;Background images&lt;/li&gt;
&lt;li&gt;Gradients&lt;/li&gt;
&lt;li&gt;Videos (yes, really!)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Use &lt;code&gt;{{&amp;lt; slide background-color=&amp;quot;#hex&amp;quot; &amp;gt;}}&lt;/code&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="vertical-navigation"&gt;Vertical Navigation&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;There&amp;rsquo;s more content below! ⬇️&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Press the &lt;strong&gt;Down Arrow&lt;/strong&gt; to see substeps.&lt;/p&gt;
&lt;p&gt;Note:
This demonstrates Reveal.js&amp;rsquo;s vertical slide feature. Great for optional details or deep dives.&lt;/p&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
id="substep-1"
&gt;
&lt;h3 id="substep-1-details"&gt;Substep 1: Details&lt;/h3&gt;
&lt;p&gt;This is additional content in a vertical stack.&lt;/p&gt;
&lt;p&gt;Navigate down for more, or right to skip to next topic →&lt;/p&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
id="substep-2"
&gt;
&lt;h3 id="substep-2-more-details"&gt;Substep 2: More Details&lt;/h3&gt;
&lt;p&gt;Even more detailed information.&lt;/p&gt;
&lt;p&gt;Press &lt;strong&gt;Up Arrow&lt;/strong&gt; to go back, or &lt;strong&gt;Right Arrow&lt;/strong&gt; to continue.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="citations--quotes"&gt;Citations &amp;amp; Quotes&lt;/h2&gt;
&lt;blockquote class="border-l-4 border-neutral-300 dark:border-neutral-600 pl-4 italic text-neutral-600 dark:text-neutral-400 my-6"&gt;
&lt;p&gt;&amp;ldquo;The best way to predict the future is to invent it.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;— Alan Kay&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Or reference research:&lt;/p&gt;
&lt;blockquote class="border-l-4 border-neutral-300 dark:border-neutral-600 pl-4 italic text-neutral-600 dark:text-neutral-400 my-6"&gt;
&lt;p&gt;Recent work by Smith et al. (2024) demonstrates that Markdown-based slides improve reproducibility by 78% compared to proprietary formats&lt;sup id="fnref:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr&gt;
&lt;h2 id="media-youtube-videos"&gt;Media: YouTube Videos&lt;/h2&gt;
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;"&gt;
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/dQw4w9WgXcQ?autoplay=0&amp;amp;controls=1&amp;amp;end=0&amp;amp;loop=0&amp;amp;mute=0&amp;amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"&gt;&lt;/iframe&gt;
&lt;/div&gt;
&lt;p&gt;Note:
Embed YouTube videos with just the video ID. Perfect for demos, tutorials, or interviews.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="media-all-options"&gt;Media: All Options&lt;/h2&gt;
&lt;p&gt;Embed various media types with simple shortcodes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;YouTube&lt;/strong&gt;: &lt;code&gt;{{&amp;lt; youtube VIDEO_ID &amp;gt;}}&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bilibili&lt;/strong&gt;: &lt;code&gt;{{&amp;lt; bilibili id=&amp;quot;BV1...&amp;quot; &amp;gt;}}&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Local videos&lt;/strong&gt;: &lt;code&gt;{{&amp;lt; video src=&amp;quot;file.mp4&amp;quot; controls=&amp;quot;yes&amp;quot; &amp;gt;}}&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Audio&lt;/strong&gt;: &lt;code&gt;{{&amp;lt; audio src=&amp;quot;file.mp3&amp;quot; &amp;gt;}}&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Perfect for demos, interviews, tutorials, or podcasts!&lt;/p&gt;
&lt;p&gt;Note:
All media types work seamlessly in slides. Just use the appropriate shortcode.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="interactive-elements"&gt;Interactive Elements&lt;/h2&gt;
&lt;p&gt;Try these keyboard shortcuts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;→&lt;/code&gt; &lt;code&gt;←&lt;/code&gt; : Navigate slides&lt;/li&gt;
&lt;li&gt;&lt;code&gt;↓&lt;/code&gt; &lt;code&gt;↑&lt;/code&gt; : Vertical navigation&lt;/li&gt;
&lt;li&gt;&lt;code&gt;S&lt;/code&gt; : Speaker notes&lt;/li&gt;
&lt;li&gt;&lt;code&gt;F&lt;/code&gt; : Fullscreen&lt;/li&gt;
&lt;li&gt;&lt;code&gt;O&lt;/code&gt; : Overview mode&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/&lt;/code&gt; : Search&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ESC&lt;/code&gt; : Exit modes&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;!-- hide --&gt;
&lt;h2 id="hidden-slide-demo-inline-comment"&gt;Hidden Slide Demo (Inline Comment)&lt;/h2&gt;
&lt;p&gt;This slide is hidden using the &lt;code&gt;&amp;lt;!-- hide --&amp;gt;&lt;/code&gt; comment method.&lt;/p&gt;
&lt;p&gt;Perfect for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Speaker-only content&lt;/li&gt;
&lt;li&gt;Backup slides&lt;/li&gt;
&lt;li&gt;Work-in-progress content&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Note:
This slide won&amp;rsquo;t appear in the presentation but remains in source for reference.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="thanks"&gt;Thanks&lt;/h2&gt;
&lt;h3 id="questions"&gt;Questions?&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;🌐 Website:
&lt;/li&gt;
&lt;li&gt;🐦 X/Twitter:
&lt;/li&gt;
&lt;li&gt;💬 Discord:
&lt;/li&gt;
&lt;li&gt;⭐ GitHub:
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;All slides created with Markdown&lt;/strong&gt; • No vendor lock-in • Edit anywhere&lt;/p&gt;
&lt;p&gt;Note:
Thank you for your attention! Feel free to reach out with questions or contributions.&lt;/p&gt;
&lt;div class="footnotes" role="doc-endnotes"&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id="fn:1"&gt;
&lt;p&gt;Smith, J. et al. (2024). &lt;em&gt;Open Science Presentations&lt;/em&gt;. Nature Methods.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</description></item><item><title>SBI for SGWB</title><link>https://avivajpeyi.github.io/slides/journal_club/sbi_for_sgwb/</link><pubDate>Fri, 27 Oct 2023 00:00:00 +0000</pubDate><guid>https://avivajpeyi.github.io/slides/journal_club/sbi_for_sgwb/</guid><description>&lt;h2 id="sbi-for-sgwb"&gt;SBI for SGWB&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Simulation-based inference for stochastic GW background analysis&lt;/em&gt;
(Alvey+, 2023)&lt;/p&gt;
&lt;p&gt;NZ Gravity Journal Club&lt;/p&gt;
&lt;p&gt;Oct 26th, 2023&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;LISA &amp;ldquo;Global fit&amp;rdquo; + GW background&lt;/li&gt;
&lt;li&gt;Alvey+&amp;rsquo;s LISA SGWB model&lt;/li&gt;
&lt;li&gt;Sim based inference + TMNRE&lt;/li&gt;
&lt;li&gt;Results, Discussion + future work&lt;/li&gt;
&lt;/ol&gt;
&lt;hr&gt;
&lt;h2 id="lisa-data-analysis"&gt;LISA Data analysis&lt;/h2&gt;
&lt;hr&gt;
&lt;h3 id="the-data"&gt;The data&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="flex justify-center "&gt;
&lt;div class="w-full" &gt;&lt;img src="https://github.com/avivajpeyi/dev_site/assets/15642823/af1e82e7-f1bc-4306-856e-b11e245cadf3" alt="lisa_data" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="the-global-fit"&gt;The &amp;ldquo;Global fit&amp;rdquo;&lt;/h3&gt;
&lt;p&gt;Analyze all the data, simultaneously, block-by-block&lt;/p&gt;
&lt;figure&gt;&lt;img src="https://github.com/avivajpeyi/dev_site/assets/15642823/1577656f-3c97-43e9-bc4d-7da09c6686ce" width="1300" height="350"&gt;
&lt;/figure&gt;
&lt;p&gt;$&lt;10^5$ parameters in the full problem&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="sgwb-estimation-methods"&gt;SGWB estimation methods&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Noise model&lt;/th&gt;
&lt;th&gt;Signal model&lt;/th&gt;
&lt;th&gt;Noise + Signal&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Aimen+ (WIP)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;High precision reconstruction required to extract an SGWB signal&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="alveys-sbi-approach-motivations"&gt;Alvey+&amp;rsquo;s SBI approach motivations&lt;/h2&gt;
&lt;p&gt;Note:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Current SGWB approaches use stochastic sampling methods (MCMC, Nested sampling)&lt;/li&gt;
&lt;li&gt;These are not &lt;em&gt;robust&lt;/em&gt; to foreground transient signals (e.g. massive BH mergers)&lt;/li&gt;
&lt;li&gt;Transients add more complexities&lt;/li&gt;
&lt;/ul&gt;
&lt;ol&gt;
&lt;li&gt;&amp;lsquo;Marginal inference&amp;rsquo; property&lt;/li&gt;
&lt;li&gt;Likelihood &amp;lsquo;free&amp;rsquo; inference&lt;/li&gt;
&lt;li&gt;More robust to foreground transient signals (e.g. massive BH mergers)&lt;/li&gt;
&lt;/ol&gt;
&lt;hr&gt;
&lt;h2 id="sbi"&gt;SBI&lt;/h2&gt;
&lt;hr&gt;
&lt;h3 id="traditional-problem"&gt;Traditional problem&lt;/h3&gt;
$$
p(\theta|d) = \frac{\mathcal{L}(d|\theta)\pi(\theta)}{\color{red}{Z(d)}}= \frac{\mathcal{L}(d|\theta)\pi(\theta)}{\color{red}{\int_{\theta}\mathcal{L}(d|\theta)\pi(\theta) d\theta}}
$$&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Monte Carlo&lt;/em&gt;: e.g. Rejection sampling&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Markov-chain MC&lt;/em&gt;: e.g. Metropolis-Hastings, NUTS&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Variational Inference&lt;/em&gt;: surrogate $p(\theta|d)$&lt;/li&gt;
&lt;/ul&gt;
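The sampling methods above all work around the intractable $Z(d)$. A minimal Metropolis-Hastings sketch (a toy with a standard-normal target, not any production sampler) shows the key trick: only the &lt;em&gt;unnormalised&lt;/em&gt; posterior is ever evaluated.

```python
# Minimal Metropolis-Hastings sketch (illustrative toy):
# sample p(theta|d) proportional to L(d|theta)*pi(theta)
# without ever computing the normalisation Z(d).
import math
import random

random.seed(0)

def log_post_unnorm(theta):
    # toy unnormalised log-posterior: a standard normal
    return -0.5 * theta ** 2

def metropolis(n_steps, step_size=1.0):
    theta = 0.0
    samples = []
    for _ in range(n_steps):
        proposal = theta + random.gauss(0.0, step_size)
        # Z(d) cancels in this difference -- that is the whole point
        log_accept = log_post_unnorm(proposal) - log_post_unnorm(theta)
        if random.random() < math.exp(min(0.0, log_accept)):
            theta = proposal
        samples.append(theta)
    return samples

samples = metropolis(20000)
post_mean = sum(samples) / len(samples)
```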
&lt;p&gt;&lt;strong&gt;What if we don&amp;rsquo;t have $\mathcal{L}(d|\theta)$?&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="simulation-based-inference"&gt;Simulation based inference:&lt;/h3&gt;
&lt;p&gt;New term for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Approximate Bayes Computation,&lt;/li&gt;
&lt;li&gt;Likelihood free inference,&lt;/li&gt;
&lt;li&gt;Indirect inference,&lt;/li&gt;
&lt;li&gt;Synthetic likelihood&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3 id="algorithm"&gt;Algorithm&lt;/h3&gt;
&lt;figure&gt;&lt;img src="https://miro.medium.com/v2/1*oer83KfCCI1AnoqsRtYlRg.png" width="400" height="400"&gt;
&lt;/figure&gt;
&lt;p&gt;Compare the &amp;lsquo;simulated&amp;rsquo; data to the &amp;lsquo;true&amp;rsquo; data&lt;/p&gt;
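The simulate-and-compare loop can be sketched as classical rejection ABC. Everything here (the Gaussian toy simulator, the sample-mean summary, the tolerance) is an illustrative assumption, not Alvey+&amp;rsquo;s pipeline.

```python
# Rejection ABC sketch (toy setup): draw theta ~ prior, simulate data,
# keep theta if its data summary is close to the observed summary.
# No likelihood evaluation is needed anywhere.
import random

random.seed(1)

def simulator(theta, n=50):
    # toy simulator: n Gaussian draws with unknown mean theta
    return [random.gauss(theta, 1.0) for _ in range(n)]

def summary(data):
    return sum(data) / len(data)  # sample mean as the data summary

obs_summary = summary(simulator(2.0))  # pretend this is the 'true' data

accepted = []
for _ in range(5000):
    theta = random.uniform(-5.0, 5.0)  # draw from the prior
    dist = abs(summary(simulator(theta)) - obs_summary)
    if dist < 0.1:  # tolerance epsilon
        accepted.append(theta)

posterior_mean = sum(accepted) / len(accepted)
```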
&lt;p&gt;Note:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Marginal inference &amp;ndash; with SBI it&amp;rsquo;s possible to directly target specific parameters for inference, ignoring the other parameters while still treating them correctly&lt;/li&gt;
&lt;li&gt;Amortized &amp;ndash; once the SBI network is trained, posteriors can be obtained very quickly&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3 id="different-sbi-methods"&gt;Different SBI methods:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Classical&lt;/strong&gt;: Rejection ABC (&amp;lsquo;97), MCMC-ABC (&amp;lsquo;03)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Neural density&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;Neural posterior estimator&lt;/li&gt;
&lt;li&gt;Neural likelihood estimator&lt;/li&gt;
&lt;li&gt;Neural &lt;em&gt;ratio&lt;/em&gt; estimator (likelihood-to-evidence ratio)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Types of NN:&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Mixture density networks&lt;/li&gt;
&lt;li&gt;Normalising flows&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3 id="goals-for-nn--sbi"&gt;Goals for NN + SBI:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Speed&lt;/em&gt;: once trained, faster than MCMC&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Scalability&lt;/em&gt;: Doesn&amp;rsquo;t fall apart with high D&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Pre-existing research&lt;/em&gt;: Leverage modern ML tools (flows, NNs &amp;hellip;)&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3 id="mcmc-vi-sbi"&gt;MCMC, VI, SBI&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;MCMC&lt;/th&gt;
&lt;th&gt;VI&lt;/th&gt;
&lt;th&gt;SBI&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Explicit Likelihood&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Requires gradients&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;(✅)&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Targeted inference&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amortized&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;(✅)&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Specialised architecture&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Requires data summaries&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Marginal inference&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Note:
An amortized posterior is one that is not tied to any particular observation &amp;ndash; once trained, it can be evaluated quickly for new data&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="end-of-section"&gt;END OF SECTION&lt;/h3&gt;
&lt;hr&gt;
&lt;h2 id="sbi-math"&gt;SBI Math&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Skipping this, can come back if folks interested&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Note:
Library: swyft
Simulation efficient marginal posterior estimation&lt;/p&gt;
&lt;p&gt;Target: X&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;say there are lots of parameters $\theta$&lt;/li&gt;
&lt;li&gt;Only parameter values that plausibly generate X will contribute to the marginalisation&lt;/li&gt;
&lt;li&gt;NESTED RATIO ESTIMATION finds this region by iteratively constraining the initial prior based on 1D marginal posteriors from previous iterations&lt;/li&gt;
&lt;li&gt;this method approximates the likelihood-to-evidence ratio by zeroing in on the high-likelihood regions&lt;/li&gt;
&lt;li&gt;method inspired by nested sampling&lt;/li&gt;
&lt;li&gt;After a few iterations &amp;ndash; some 1D marginals will be more constrained than others&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3 id="loss-function-for-training"&gt;$D_{KL}$ &amp;ldquo;Loss&amp;rdquo; function for training&lt;/h3&gt;
$$D_{\rm KL}(\tilde{p}, p) = \int \tilde{p}(x) \log \frac{\tilde{p}(x)}{p(x)}\ dx$$&lt;p&gt;$D_{KL}$ is &lt;em&gt;not&lt;/em&gt; symmetric&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;$D_{\rm KL}(\tilde{p}, p)$: Variational inference (LnL based)&lt;/li&gt;
&lt;li&gt;$D_{\rm KL}(p, \tilde{p})$: NPE (Simulation based)&lt;/li&gt;
&lt;/ul&gt;
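The asymmetry is easy to check numerically with two toy discrete distributions (hypothetical values, chosen only to make the two directions differ):

```python
# Numerical check that the KL divergence is not symmetric.
import math

def kl(p, q):
    # D_KL(p || q) = sum_x p(x) * log(p(x) / q(x))
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

p = [0.7, 0.2, 0.1]
q = [0.4, 0.4, 0.2]

forward = kl(p, q)  # analogue of D_KL(p, ptilde): the SBI/NPE direction
reverse = kl(q, p)  # analogue of D_KL(ptilde, p): the VI direction
```

Both directions are non-negative, but they are genuinely different numbers, which is why VI and NPE end up with different training objectives.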
&lt;p&gt;&lt;strong&gt;PROBLEM:&lt;/strong&gt; how do we avoid evaluating the $p(\theta|d)$?&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="kl-divergence-and-vi"&gt;KL-Divergence and VI&lt;/h3&gt;
$$D_{\rm KL} [\tilde{p}, p] (\theta) \sim \mathbb{E}_{\theta\sim\tilde{p}(\theta|d)} \log \left[ \frac{\tilde{p}(\theta|d)}{\mathcal{L}(d|\theta)\pi(\theta)} \right] + C$$&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;PROBLEM:&lt;/strong&gt; $p(\theta|d)$ is expensive to evaluate (intractable $Z(d)$)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SOLUTION:&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;$p(\theta|d) \propto \mathcal{L}(d|\theta)\pi(\theta)$&lt;/li&gt;
&lt;li&gt;$D_{\rm KL} [\tilde{p}, p]\geq 0$, giving a lower (ELBO) bound on $\log Z(d)$&lt;/li&gt;
&lt;li&gt;Train $\tilde{p}(\theta|d)$&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3 id="kl-divergence-and-sbi"&gt;KL-Divergence and SBI&lt;/h3&gt;
$$D_{\rm KL}[p, \tilde{p}] (\theta, d) \sim -\mathbb{E}_{(\theta,d)\sim p(\theta,d)} \log \tilde{p}(\theta| d) + C $$&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;PROBLEM:&lt;/strong&gt; $p(\theta|d)$ is expensive to evaluate (intractable $Z(d)$)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SOLUTION:&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;sample from $p_{\rm joint}(\theta, d) = \mathcal{L}(d|\theta)\pi(\theta)$&lt;/li&gt;
&lt;li&gt;Train $\tilde{p}(\theta|d)$&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
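The recipe above can be sketched end-to-end in a toy linear-Gaussian setting, where minimising the Monte-Carlo estimate of $-\mathbb{E}\log\tilde{p}(\theta|d)$ has a closed form (least squares). This is an illustrative assumption: real SBI replaces the linear fit with a neural density estimator.

```python
# SBI recipe sketch: draw (theta, d) pairs from the joint
# p(theta, d) = L(d|theta) * pi(theta), then fit a conditional
# estimator ptilde(theta|d) by minimising E[-log ptilde(theta|d)].
# For a linear-Gaussian "network" this reduces to least squares.
import random

random.seed(2)

pairs = []
for _ in range(20000):
    theta = random.gauss(0.0, 1.0)      # theta ~ pi(theta)
    d = theta + random.gauss(0.0, 0.5)  # d ~ L(d|theta)
    pairs.append((theta, d))

# Fit theta ~ a*d + b by least squares (the NLL minimiser for a Gaussian)
n = len(pairs)
mt = sum(t for t, _ in pairs) / n
md = sum(d for _, d in pairs) / n
cov = sum((t - mt) * (d - md) for t, d in pairs) / n
var_d = sum((d - md) ** 2 for _, d in pairs) / n
a = cov / var_d
b = mt - a * md
```

For this model the true posterior mean is $0.8\,d$, so the fitted slope should land near 0.8 without the likelihood ever being evaluated during the fit.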
&lt;hr&gt;
&lt;h3 id="marginal-sbi-vs-vi"&gt;Marginal SBI vs VI&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Variational inference&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the variational posterior $\tilde{p}(\vec{\theta}|d)$ must cover &lt;em&gt;all&lt;/em&gt; parameters the likelihood model is conditioned on&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;SBI Marginal inference&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Can target $\tilde{p}(\theta_1|d)$ instead of $\tilde{p}(\vec{\theta}|d)$ without explicitly computing the marginalisation integrals&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3 id="end-of-section-1"&gt;END OF SECTION&lt;/h3&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://user-images.githubusercontent.com/15642823/277592172-be608f89-4e27-489f-b3ab-48011968790d.jpeg"
&gt;
&lt;h2 id="marginal-inference"&gt;&amp;ldquo;Marginal&amp;rdquo; inference&lt;/h2&gt;
$${\color{red}p(\theta_{\rm Waldo}| \rm{image})} =$$&lt;p&gt;
&lt;/p&gt;
$$\int {\color{blue}p(\theta_{A}, \theta_{B} ... \theta_{\rm Waldo}| \rm{image})}\ d\theta_A\ d\theta_B\ ... $$&lt;ul&gt;
&lt;li&gt;VI: have to learn &lt;em&gt;whole&lt;/em&gt; $\color{blue}p(\vec{\theta}|d)$&lt;/li&gt;
&lt;li&gt;SBI: can focus on specific params $\color{red}p(\theta_{\rm Waldo}|d)$&lt;/li&gt;
&lt;/ul&gt;
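The point can be demonstrated with samples: given draws from the joint posterior, the Waldo marginal is just one coordinate of each draw, and the integral over the nuisance parameters happens implicitly (toy Gaussian draws with hypothetical parameter values):

```python
# Marginal inference from joint samples: given draws from
# p(theta_A, theta_B, theta_Waldo | d), the marginal over theta_Waldo
# is just that coordinate of each draw -- no explicit integral over
# theta_A or theta_B is ever computed.
import random

random.seed(3)

joint_draws = [
    (random.gauss(0, 1), random.gauss(5, 2), random.gauss(-1, 0.5))
    for _ in range(50000)
]

waldo_marginal = [draw[2] for draw in joint_draws]  # keep one column
mean_waldo = sum(waldo_marginal) / len(waldo_marginal)
```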
&lt;hr&gt;
&lt;h2 id="truncated-marginal-neural-ratio-estimation-tmnre"&gt;Truncated Marginal Neural Ratio Estimation (TMNRE)&lt;/h2&gt;
&lt;hr&gt;
&lt;h3 id="active-learning-loop"&gt;Active learning loop&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="flex justify-center "&gt;
&lt;div class="w-full" &gt;&lt;img src="https://user-images.githubusercontent.com/15642823/277889707-8e9f5955-b8ac-44e0-8067-808a5ad189d2.png" alt="loop" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="network-architecture"&gt;Network architecture&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="flex justify-center "&gt;
&lt;div class="w-full" &gt;&lt;img src="https://user-images.githubusercontent.com/15642823/277868586-284becb9-8f47-4ed9-9a92-6a3e7683470d.png" alt="network" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="truncation-example"&gt;Truncation example&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="flex justify-center "&gt;
&lt;div class="w-full" &gt;&lt;img src="https://user-images.githubusercontent.com/15642823/277902380-7807ed9e-99ae-40c4-b242-b7e9328306ec.png" alt="trunc" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="alvey-signal-and-noise-model"&gt;Alvey+ Signal and noise model&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Noise model (only amplitudes parameterised &amp;ndash; shape fixed):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;$\small S^{\rm N}(A, P, f) \sim A^2 s^{TM}(f) + P^2 s^{OMS}(f)$&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Two signal models (one chosen):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;$\tiny {\rm Power Law}: \Omega(\alpha, \gamma, f) \sim 10^\alpha\ f^\gamma$&lt;/li&gt;
&lt;li&gt;$\tiny {\rm N-Power Laws}:\Omega(\vec{\alpha}, \vec{\gamma}, \vec{f}_{\rm range}, f) \sim \sum_{i}^{N} 10^{\alpha_i}\ f^{\gamma_i}\ \Theta[f_i^{\rm min}, f_i^{\rm max}]$&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
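The two signal templates are straightforward to transcribe; the code below is a sketch of those formulas only (parameter values, and treating the $\Theta$ window as an inclusive band check, are illustrative assumptions, not the paper&amp;rsquo;s priors):

```python
# Power-law SGWB signal templates from the slide above:
#   Omega(alpha, gamma, f) ~ 10**alpha * f**gamma
# and the N-power-laws variant, where each component is only active
# inside its own frequency band [f_min, f_max] (the Theta window).

def omega_power_law(alpha, gamma, f):
    return 10.0 ** alpha * f ** gamma

def omega_broken(alphas, gammas, f_ranges, f):
    total = 0.0
    for alpha, gamma, (f_min, f_max) in zip(alphas, gammas, f_ranges):
        if f_min <= f <= f_max:  # Theta[f_min, f_max] band window
            total += omega_power_law(alpha, gamma, f)
    return total
```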
&lt;hr&gt;
&lt;h3 id="base-model-consists-of"&gt;BASE Model consists of&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;${\rm data}(t) = {\rm noise}(t) + \sum_i^{\rm signals} s_i(t)$&lt;/li&gt;
&lt;li&gt;Single TDI channel&lt;/li&gt;
&lt;li&gt;12 days of data (split into 100 segments, 1 segment ~ 2.9 hours)&lt;/li&gt;
&lt;li&gt;$\Delta f\sim0.1\ {\rm mHz}$&lt;/li&gt;
&lt;/ul&gt;
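A quick arithmetic check of the numbers quoted above, assuming the frequency resolution is set by the segment length, $\Delta f = 1/T_{\rm seg}$:

```python
# BASE-model data settings: 12 days split into 100 segments,
# with Delta f set by the segment duration.
days = 12.0
n_segments = 100

seg_hours = days * 24.0 / n_segments  # segment length in hours (~2.9 h)
seg_seconds = seg_hours * 3600.0
delta_f = 1.0 / seg_seconds           # frequency resolution in Hz (~0.1 mHz)
```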
&lt;p&gt;Note:
this is ~1% of the full LISA mission duration&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="model-with-transients"&gt;Model with transients:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Same as BASE model&lt;/li&gt;
&lt;li&gt;In each segment, inject 1 massive BH merger (priors below) if U[0,1] &amp;lt; p&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;Mc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;U&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;8e5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;9e5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;eta&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;U&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.16&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.25&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;chi1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;U&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;chi2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;U&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;dist_mpc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;U&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;5e4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;1e5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;tc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;phic&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;hr&gt;
&lt;h3 id="mla-training"&gt;MLA training:&lt;/h3&gt;
&lt;p&gt;&amp;ldquo;Several numerical settings should be chosen for the general structure of the algorithm as well as the network architecture&amp;rdquo;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;500K simulations (9:1 train:val split)&lt;/li&gt;
&lt;li&gt;50 epochs (512 batch size)&lt;/li&gt;
&lt;li&gt;save model weights with the lowest validation loss&lt;/li&gt;
&lt;/ul&gt;
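The training budget above, written out explicitly (the 9:1 split semantics and the derived step counts are my reading of the settings, not numbers quoted from the paper):

```python
# Training budget from the slide: 500k simulations, 9:1 train:val split,
# 50 epochs at batch size 512. Derived step counts below are implied.
n_sims = 500_000
train_frac = 0.9
batch_size = 512
epochs = 50

n_train = int(n_sims * train_frac)       # training simulations
n_val = n_sims - n_train                 # validation simulations
steps_per_epoch = n_train // batch_size  # full gradient steps per epoch
total_steps = steps_per_epoch * epochs
```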
&lt;hr&gt;
&lt;h2 id="results--discussion"&gt;Results + Discussion&lt;/h2&gt;
&lt;hr&gt;
&lt;h3 id="mcmc-vs-sbi-fit"&gt;MCMC vs SBI fit&lt;/h3&gt;
&lt;figure&gt;&lt;img src="https://user-images.githubusercontent.com/15642823/277888874-1ab882f7-e3d1-47a9-a542-96101b8b92b5.png" width="500px"&gt;
&lt;/figure&gt;
&lt;hr&gt;
&lt;h3 id="some-thoughts"&gt;Some thoughts&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;The good:
&lt;ul&gt;
&lt;li&gt;&amp;lsquo;Implicit marginalisation&amp;rsquo; may enable focused study (without global fit)!&lt;/li&gt;
&lt;li&gt;Fewer evaluations of the model needed!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The $\tiny{\rm bad}$ not so good:
&lt;ul&gt;
&lt;li&gt;Doesn&amp;rsquo;t use the likelihood even when it is known (no gradients)&lt;/li&gt;
&lt;li&gt;Requires robust models for noise&lt;/li&gt;
&lt;li&gt;Need to model &lt;em&gt;all&lt;/em&gt; signals in data generation?&lt;/li&gt;
&lt;li&gt;MLA architecture&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The ugly:
&lt;ul&gt;
&lt;li&gt;unfair MCMC comparison for data with transients&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3 id="future-work"&gt;Future work&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;More complex noise model&lt;/li&gt;
&lt;li&gt;Longer data duration&lt;/li&gt;
&lt;li&gt;Additional data channels&lt;/li&gt;
&lt;li&gt;other &amp;ldquo;SBI&amp;rdquo; blocks for the global fit&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id="other-related-papers"&gt;Other related papers&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;</description></item></channel></rss>