From d359c54326cda4a9ec68699764c1d3eebaba1b0a Mon Sep 17 00:00:00 2001 From: EduardoLR10 Date: Sun, 16 Mar 2025 18:32:34 -0300 Subject: [PATCH 01/10] Fix time domains image --- doc/MastersThesis/img/TimeDomains.pdf | Bin 23719 -> 22010 bytes doc/MastersThesis/thesis.lhs | 2 +- doc/MastersThesis/thesis.lof | 104 +++++++++++++------------- doc/MastersThesis/thesis.toc | 2 +- 4 files changed, 54 insertions(+), 54 deletions(-) diff --git a/doc/MastersThesis/img/TimeDomains.pdf b/doc/MastersThesis/img/TimeDomains.pdf index 95195295a7a1f62fcac8c614f7ab6bfa3dea088a..d8aba9c6b799c6fcbfec5267d510831cdc75dd27 100644 GIT binary patch delta 21155 zcmV)xK$E|xxdHmC0kE9`3pFq?GcYhTFgY+av#0?90R%8PH#L*30wjOgj&8}3rFVax z;(vjN@_7IRU35!oK!604ihwQ}x=K}vYUF7^DnNRD`TCamj+3l`S!6I#5Cdc$xA1=t zzI`-XHsx13|9i{lpVHTR&NHv|`sw?(|La@kC;xjY();Yt55ZL;zJD8Yo$_bO+S+^W zqxAE4`o&+r|MtJW{_B4qKKGxz=ufEloB!_b|DXT!r@#MS{?mVW{l~wfN$K@>|M7qR z_Ba3OAM@+~_{Vek|NE`I{-^WHe>wmCkKg|D|8_3;-@lcV-qU=I^q$W*w7Q@s-+R5Uj=%n_SO5C) z+WURBS6+AC%#T0+nTb=+)ph22FZr4|A2TPq&m)Cz^zPx_I%m1zW#q;v`^EVoE_p`eDhm*#C{;NlgFZtRrK6Z?ilSVpZWcCQ(pOfsrN^!?|Z*WN$*|DA)k)Y9#s3Q-<|0c{ptPF)8`qv zASs@Xu5`Jn=ePdD#9zMstMgCGmuJ4(@%>vLAAkLP|Jwd5u8hBa{}3 zei~PD{2hPSKY#yL4J`ea4UE5^2G;t^20nlP#TyvwYW#9p<@Cd{%KgV>#rE{4FRSsj zsvbUqL;T_W{nu|K^o(D!Fyq&-F#kg3udDjoA5gjRGAcJ-wUu9>a^)|G%C$D%lO8_K z0Jl107M?H9B^x_@`|*!o_U=#r7`^+CKmAwrZc%^lw$tXM4m?CIM{qkFz*{~C(Q4cbvESzjJT2tGQjL#nh-Q z#btjh@k?J48lm>&x1npg%X3W~uBrKKHm5A-Z}yQ^nl_r3%6|5qcDk!iwWXdO8;6am zwQz4a@9@*q(L61kSFKl!!%t_{^waevYfMC~{TlPE{&6~=;-_zGL+oA7DMg{nfBGh> zwtQ~cl6AR3%}N`R<7tE{JtW);4pcDMT6kv!7HpVO=M4ebxN@O`56q;uxDEIB@>%^7bMRd&6k zTAeim&o-dXnGouLm0VZOf!ER*9ln0G+dJ+2XX4D`oTrKNV?Gy)Ja0&vIr>=x^g$$A z%ka$@HV6%z2tA}t$e&HR?RxTf9z1{Iakr!o&a@ck@p{E~lRR0Qor#w2D`r{u6(Y~z zU-CRDBb=NsT&MS0EqS*ps-&WUcBz|w`ew4Vs2kd6&UzEUo;0FP8_>DRM4A5aEcx@7 z)}m_HS?)~y(*W0~%y1hbYPr8f)Az^^sbgen%S`!$aBrK`2<#bSqB~>A^t^uCn{cV;s($v! 
zGYL1u`I4%=L3Qn3)#A>dzL`9$ldHzPbHtsog-VZo$`O&zyhRQOkJqD(Hi zKv{4z6z+}XUG@B)UDh7GF?Ws@i1O|VsonLQ=AE6iYz`1s*d9CTR%L&8h4AhQ>BsHX zF6CYI?9}xs-7HZ_dmydW9q__wPd=yStW%<9zM6vjptRV;#;Z(w_;U`kSo~Hf@2-%X z30`^Hbvg7WUDLbkR!;1${(5#-NbRmBG~Bb_rCOxHjcGArQ)PfY2(NdBZ^p2hn9sDB zrQB&D(mgijx|DZSo^gN6m=-n12L9coo>Fsc;I%v(crDKc4taL*hddXbd$pII;f~$4 zw(|njw0S=DRl5#Rt;<7ct)0U%QMBdZ8pUUO_3W&J!;pfGs__x$_ua^uV^BApAWg`4P<{lQe}`5&st%-uT?Cp z4gmUynp3Xmvs}{sDcUWPcGv7pwU*~7uH`w3&lve@Jb(`on{S=G5@wC<4 zV@uQQBZS#ONO6CNXIF%Ssx|Qe5h4h3r zsTnbt(5=;&&}W`7p_`bnCgCF{^r1ANTVIJ{dqTG{J)zrJn$Tw*lv^;EP&%^58ZKu{ z=r&4BXedt%NKK5Um7)PZZ@d$_^?WDvnW>)8M<}d65KMmzNIIfzPw1u-K6->ta*v4g zgoYd%`bqz{)?z}_N9=op2~Ft<4H@kgNxP3I?)KDjC-fPQJE6}Q`D@xe9iNe8SYeTb zGeJp&n-+Ik%xej2)q}Pqte%Fg(PE7D`O3G`MLULdpN*V*I)d%_jw~^Dy3D2XlLXmw z^7nMUvYdZIu_wtle0)zyoWg0wUfPRKOT`7AO_$dd?e-EVyUxFAqFP*p)ULCTI(S5n zHPO>pkY^jw(tsK?S+tn9R0n4)7=XO}*u#hC z4PPj(a|ID&aLDtqe7fe^zT0WrZBet1bgL=0gmN_m{WYW=D(>~H*}q_W>un2l5NGt; z2O};aTc`V}_#UL%197)e%_Gu^H6_@=raWl z@yHP-t0b(DBSwGZh|xoi7~@_w9%;bj7T|vnas+pdxHIGkKcB09kRwLDas<@*wIN50 z7IMVd>0?YLa>SU(5fn9G7C2>or@V3m?edt6M~;BhmN`b8_oF{@#AuHkaT*Y)Qw=%7 z1abtVc5|*W@yHRQKXL@*h{xTwG|yDN_m;shj#YB>#)4%V3l?syVU%WU#E0{gJ?MXi zz4I;H8{5*t*wZs~*V`N0(jPZh?cJwAdl{0M2_{!FaHiIyP5{*T5909t{Mh3&X0m`X zQ%+Aw6QI(mfY&^CcyY~KeP*BWk?b=CPDbJTvX{`zUcfWw5P_iFbVxOQnxQ_;6yFw2 zFJ(lXhCodh5lt_5EN0+UGw=MA(FA|G^IWhfs(XFbY%pmGLp?tTjf%qe1w1u@98X!U zxC2#F<+UQumJ1<)<~036Nay2CPWZASKv43*0@ni?$}tjLl>&ug#2e(98%x}n$2=zL zt}%=Eoz82wJKI-qw`WGIHsj}DfK2!y%*x4ZjM={Cy0d*#wl9~w6qVHu6a0$^BNICBsar0WdymIEmNwVL%I!@C}rJt9dSgj_7Oct zoYi^(K=`TCC)Z1;8{8*C7khuEPc#EH=sGQcf%O=(EmOQJbyLf>^Ldp!^_YJMebcW~ z{QN5wCH41I#Tv}_9iSJX=m`2(=sEN(?Yd_>+rV;~B(JUApYwRcX?*??7yPzg zXIp*KH|H~T-a;=S%%i3n(*K(juL+)viWbZ0*xXN zXlXnO*X1SF=G*BrTZ4m;h<_ z^2~tek6$qb$`L2emVV{sOaOeG6cT@ri;Wkt3YCCt2vW9}?3HV|?0Ql%Zn zItx^gNH+|-iy1@D%_Hwi{%}!#!eE`f<6S%E``Tx}{aTlQP3wPB#=VDM)v%v+S<|O1 z%QJ*ne!kv>fud^eSNFU+ghZPxphB~n-jbO?0g5jy0}r>lc3N94m?`wv0t-}g=G9nkA~yX)8zv#Ean+biQWjq}$Z@>fa_PVMy4uc3LG!9#owY 
z^0jOV2eeS*(=%QYdSYWq^vP8S>q|0oj_2goT_bcv7uTvk z&S*4tuq=hMY5?EFXq|&!@?u?pU_F>YK$6I6-eufel(d$IzC-v%sRm5d zcxpq-JSl&p*_sNgffq?JVa+g{=h^Bk^h0SE7ua$p@01}1a{H3~)%kD56)WyJzlb_p z-U4FvJ(l$++vcf>E9P_0?`0ttk}VpgVPQV>NgKH3=WkN*YGX+8FC8G_G#uscDeCrT ztQy)Y5#EADNwi;~QJo*6PJbkwKteT6PiZkHhm?O4%AoPU*>m&|=tYGTA8#L4osAIc zuj1^CT9!yHP!;M7C*>Q0NeMN%PDN^DZK9)7x&(FvSRSFJ+gH?lRqM@`DOoVeoGuOM zK?+pORk*2+P_%#6k!u$fsG7BV3RJ~x!JujuyCYCFXEcS?gb_xQGSbj$tH@pXN+)L< zuC9M;G6icQ^dq2ZmQjvC)k9PO)ZAJNP&I2$b)63RbCU_1XF#%nLSoGXA)BcddJ|n2 z>XHl$UU=0GP&L;u!O1m_@vI#>?qt(dod;?K5uaSpe4kG)6Ht`_FsPcf5laxv(r*b= zHQ_N3xm0D<%IWqLMhc*JGG;i-R2-2w8iRkTS&WAOjclGr-QQ<`s-8a7!J3pH0#waI zLyR^G5>xP2!X9IJ(j0bZ+8#rjd2wmc+9Wm_xdfpRvt`TP2{LOH<|;|pt>Z^*A2qiP z7yoFGA+THq8M10GL6kVAM2hK~EvD02&ve?DW2mcJ`jPN~ ztt&dhqp%iT6kBkMtr(~lrs~lsi^_1!kgUxb@Z4F-7oI!hS#P)u2qBPau&W!H4MmWK z>tsg8+Fxb{^rOhFO<;yXUC2i$*;{`L#e1D*bZJF}S2tHq!+A8D=uk+v$kGo~{xmLzXfigkrTjyf` zJlkjr<$3brsOd~m%^sCr3MSFtj}%02u@q{mBkE)Tr9&ra2C@_mtG6Y`lx=?n-g#Xp zRLe5nR3?k1+djc4QBhxx#{s$4^e$Dp7-tHG~DWHPBa>!wJH zSXUT|(NC@uQELzwib2Nnt|_6&un+SHJ;aBt!d()O8$ri|piMMv6?T8ItwLoIO|5?kr?yJZ+S`sY-P%kOsPrJs$+z9>kEA7nokE-0R+us@6pt>g z=wF!bo(m+@a0mM4G_5)7+CHR;Mw3Pnife1i4%e7m&d?S0bdMz{HwvBh52Ur3CQ&SR zYcsnnoI;T+!x8Erq{xM^B3GofnUKE__wV55YBXUZYwUX=ZRlQ%Gq{*#Osn%w4rzISoMqRsCwf>>a>|vJt-PHND zBGL3~mfXjPF&UFFnW^8xV%7{`(*W;9 zy~h4zBKf-5?YMu+#AE)}+rHXN$PtgbZ3)d+55w&K7-pZ|m@~AQQ2I>lclyd^oKF8) zo7s1^BEIEYxHm+hxi6lfYu(-uWtAYLRDyk1vrB!?%P#f3t<4-h*0u?&1R=vzV7c43 zMRCe#0w_V>7#fWfi)*B1ZRU_Mhwj1NoFYsGyo5&f0v>;9^lK_Gt~L`AMQqM)@yPvZ zGe@2y>bA9+5YhB}$6{pQ2-zg$r;H}hokIwjiWNHKGOW!U1?$vNEs@tNyF%$n#rVW1 zEwq_X)@GX2vK4{;(JbHe$D2IeWNqfa!V_)gNHw5Tb-%9adtPm3mx-zClC_x(Z%ot` ztY*KO%f5fw74ex>G25?+HuLU-@I#oDli3)veU+=tgdFhX4YDnmN#BQu7 z57i-ALWk<;$ub;|@u50Ku{LvzWo_p9yuXAtGq3m6(PGN0I=3&y$zPe`la~XUn z@8?|n(r12>KJ#Dg^!;Aj=dQY6F+ED{@6F2V&wKAk>IAzt#t@$RLHXD5 z1EGIs__q)0KH--BP$Xy$+Bb5XbSZjbP!n3X$E&Z#AKulAoRi(M)0BJ#nCvj8j@vGd0x;R056KR@H!OdR3bru*mcM#qTbAvy_vxgd8c4rdAKH_% zx9Ti^4Fl2uw`|t<+j+Ond9hum!{g#kD~n0H)6S@o$!2poFFOPpc 
zT;guLOwM}R#<$d^XX_Il?2UZO7VHmp^3U4M|Jbzs55N2NH~#}0+wcD2{QSHBJ09>R zZAN`hod2`EpHV%;@4kQgZ_3~Qx3Ay*mme?FYWm}st^D5VzjWDu|GQ6OTH3lcIC+ov z@x`zFHSsIGzH43)UoHJY=l#X5{H1@`mA>E2t$fCE`!$F6=8qAa#a1NE~UZrcL%mtlsfs`h_jr_}Io zk?SCcqI47iGGbX@0)$+Y4OK9#l=utjDRm%MZ5N*NQ!SE7hk}TRS6Xz6tiy>GVB88W zhOiD*$6A*C6XS>x793^0 zPuIYdCY_WgIo)t%MyYq+|)#;@V(&IkSmT%E0V^Zqj5 z{VU|^elXuvsO>NN-7ovyzXJQ+&)MiL2mjH1e*e*{w9Suy;i}KtG-WGk;5?yH1K(+~5rUhV_*$^FKvvX`7x zc(yz0`)h^b;?=U!0X)~T35p8yOC+~bC z+{|`0{Je@y9qPQrFP9_fjYFl55Jf2y#Cd~IeFVzsJ3m^~9u@`ZqfFPh`SIt3)DkTA z=gjr`bLKX}QrUn0oRrppnv`8>3x7^{&oDY|;m>)tRcknb<{bW< zxr9F_kGU4p;m?^{_;a4m`_O>qA+xmXh-}i&=Yiw&kxPFZODanLMb_AP3d z$%Y3(anQW&fO-+TY&@aMY;FoEbJKIS*x-P$XA!9nbXPXV7*fczG=<$Mro$!bguTn$ z)N$(IH;l(iKmLHqNi{o2q6w_F>WibY_uJiCExH)=a9UCS?G})E-kD~ z>9Y?a`MQ7P)kaT7WV>tM@`sV_Mg(HJ{WNT5$}?_x%}q1b+zZR#L&{6O*qpIwxNUh| zyDhKz$Wvg0sUHMdfcjxf+^1b3ij^&80A)}_EGvhaD1Aq`v}+1ey8W(;7QLW7;U5D4 zqGRmY;a&8gu6Be*ztk|2V=u0*6Sw~mVLA;d%V2*3R!Q-Ko4t*$3#xwA4V&&4At`Yu zq!^hp!;V?r4C05>FDYzL#q60VH+m5<-l&7K)5L1n(8goMvbGnUUP=&|1^j!(f@^nF zAhjwbX~0hLm`nqjcMlk1wvFtcWn&{YY(@Y*wwa<`jj~+BUP(Zr`dAQ})DB(4 zg0Qy7g0Qx*AgtHUuzuJ9)@vJBTUZd*_E-?|H0tWTs{UX>Sj)8_gpzGF7KF8g1tB)j z5rmCcZv#eOtjW-jJC&uu$ISyu$CJNt%L=kiR6Fl z!h(>iOgt8ZwLTVvkRu*<+Y(0pHTb~W17DHWJP)8E%vuClGgF>zfY{^#r7fK}D-N z31MrF4ni5tBFs2gNA-am-w=7^FJ+)WnAVO!Wh zIc>MD_=(ij<#@gRdLl|XrW)+Y7=LO^wSQA|Jh%Pvb>YDy9(T)F5MPnnz9O~#7LnTi zT_XNxZ~aw}{~x*T!TkT(>psntBh7ccVt!62oi9cHpE(?LeCBX8iFslTL==Bw(+;pV zLL$H9xCK@~vCW<pEtRHtnKesQ(r}iZ&1Asf?o20-!2aF?6*9a$0QUd*}j~ek&B2a0_3uPh> zNwY84i)qQGg=W9uRva-~BRqeAHTw<%AdtiJT*9sAyJqY_d)?9OqgAqFSZMRdRq$&S zpw-{RsuTzYlP^71#8Y1*M1lM=6O}`$59hfT>IN{QH6Eq@=Z)F^-1&cw70J8bp_CQ` zrvTQ#LY<qHXgzFjNSjpRP%L5^HXQsVVXbn00f{! 
z>$!Zy{R}02+#qO%j7TE2N9@TWyrmG<(f?M|FNmf3i+`md^oDYXVq@eM+qoD89aYlk zQDn~`Q7icMmvJ+aBJ;4!_QTWKRb}d26Ln2$1k5RW)S+1KD*b=N-}7a<{=03ndVFl> z#!noik;?nwzooWP8qoU7k?Qqhq=NnxBc-Xn7H4CqJS7(|TTV+nxAs}OS1ogu0>w>b zG`lXVp?gu1Mp!~2DmOf(T3A1HtPk+UeO>Wa!tYu9SU==z+ig1_F(w6Tw)H5+lTV%= zMh_Fq32dkb?6iM@>1wN=YLf^>m~nMR6MLqKR0|cOYdu)T>;-Hu+tIBkMv2hOlEXt5 zN^pThM`@7vxH{!&-844cc;R8WV2E?QBn~T<(T2*h{*g)nD>x!{TD`pLZCQR zh=Zyc;X=a9R8C{e0**Bm{M7p%B+qoM>t(p+e%HRQL(6~BEL4Q!yKL*VxxZ~p0^WRR zVpy-x7x?#vS-rq(OJwyL)wdNk^-6Gw437OgLH|b_A02OX?Tji6Qe+#`>pIzD6|W#=`d@9Ww@^=31A5?aZJeO@E_wrxW%o3n1`<*+?b^KE+~kVLj8%6i~LRo8mH zrE5IT=8=CLl!Syw<=W1JeTV&tkRgH4-@i#r-e#RY;+1%oC`yjHFF!B*oe>H5EXh z?SU5t0cYjO*YNHW|1JeZ@m@%24I}yodHrEnGQ)prMH=D>wMc0FlKcJklvB11;rt3y zwUH;}{;kl=xtJXUSIzog`U} zJ|cez6Cp&jk7%66>nb0UQ)?w{ET~+;u!88Jq&y@EqGG(V&i#0O4ilIxTmdz!(Rnx@ z!8&z?jY@jVRm9+V_g0($H5F@8@8hwd`}RGZCaT1CC&n3i0MQ1@MinEEK3d%aHjRKGEI)`BsNPxv84Nj ztwz?4YIV9;bzh@b!Hd!8w{q{(I1g-CFm+890(Wiz?h}g|;OT~YVdezCKB~tuk>fVw{es-ynx6SGbHj+{VDhDF^ z$V%WYsaBJX!ysynTvmg%TB;*}NEupVITkH%>h$J%3AM(@yrh)Y&}0g{ZY23iK$s>R zdbL-lCm!D zllgmH&Y?w{ZaZl-9`r1qLhKS-Xl^N^Fi%oYcOn!ZraA1AmN&t|2?Ii$qY}%9>pX>z zG(R3v>x1cDwhZWP;HKE-S{P-85bxw!ET1w%oJal2RpNrYdypy{ z+I-GI{RO+oNPL8ld>(i^~W*RW+fF+aW^{rkKG+u>>pp7^du40I+@t1QZ>(JUYClWW?T&Na3!RDyj-(V=Mvx8>zibkO-Bi&%iDnuQ@cr5K^|b)R-c zmnQC!DZJ{5o-cHdE|I5es@Y#$m`3<+5hWI`+!zslm%Tz`KR&h0vKD+FQ2 z-{hjM!1{B!d4Cd%1sAG-MC4J^fwx^+6)H+ror?XXt?7K8c~Kb*P=oh@*u-sNpcb76 zNCPM);X|tqW<^%h8H{BcGnQmz~%HW6T9mwkxr&F`Wnjxl_S46-B*~zpbu@h zb%^M3-9|X5O0|idYH5G>C_ino#6S}WPs(8&A_9r6EF)BNGT3e`qy!Vq-^(&2U}Hrr zsD%v9MlRNJoiL6;I4Tj-T)xOn{F3|pc&FtB?UM;$_$Q>XlOtdGJ;TAEfan{B!SEBt9nlN_3eH=;XM%J z=8S1n#nhW4tU7Jw1P>RKB&@4U{!U}kZG9SJ=W76ZR)O*WDU%>u^#Y`10`LRyN%XX z)g;Gus>=BXXnEL?NYpkPT4%l%^p$L=&Sq;1_9~I2diZ|`(S^A{j=fKLQKMx{H|oS0 zq;4X53exMKA`{fj(O_V8=ZKp&J0T1SUgr6VTUK56_^3GP`++sPMtl%<^xZq&@2w8E z=xJqp!P=QK7-P(*Y zy%B{M1Ls3|I$j zV?#@#aD)U*WZbEfdxfa-^S33)VTfZ?I-G{GQpcJUDHDqi2@wrsQ$e0OA4%4*BM3JN 
zdA9CX*vskQS6Fe(8!A4XomdtHB~gxOU@L#d@Md~<(yryQJDi$Nx4u1FzLn_nGjr>TzseEyDN(OxuJUSH1PTsNl$~qNw@R> z1&SZ`Ub?eKGf9Vyg#)}S0#Hih0hC%Sj1*kMIo0m@m=mZe7=N}-K!ndnlR~}{;a~ep zT$~1C?{U$j0?q)^)a-(6m}}YYH%ot!laZKp3|ae!BCxaWb0{RrcyttXMjifx-NaqV ztfUgClD4G!10IFW3Z0Y;q9#jMqwdn?>}2wjO|B!*XxF#iqB=TFmZ7K_r|H4V4ZGZM z2g{+n(2Ue6kB0<4n%8MF&U8}OvqU8s_?KqQxh%3aAF*w>kQ=dm4J;K$aL#`+i4wPK z!q5=pApCnc!^!yn!k)d}?X+1@CYQDtOH{404nf0OaI2V{Yd=MlcEg@oKy<+oBpddG zXnqJ0SaolEAkNF;^E~PZXfxy$-ycyaD@c*r_1&cAvuuLjUxmWG9M(hj6hOPBf_quKS*lnfuxoeNNTAMl3L1xq?Ym^siizfDgn;}3Mzr5 zmJ&!RuwXzzHIUR&14$*Gdq6?$K~hV5kkn$aN*98pmU@xY^MXK9d6iL|dXZG_6V_vI zz|sOq#h*cc8%aGZY9N29rC%hq^oyjHev#DDjHH%kB(=1Qq(Tj9KvGNlKvF-?a$vqY z2GZW|)2`EBD(Nqk^gmHa%cSms9xpi2(ghx;xwg7IjCIXN@-}Rt6TBstHDbAR;}aUk zW`kqPA1NS!&H@sL*RYUuEb{H(=j4}=>@tF#AQPQ z0^i531gwa1HQ(7K7Z3JFjTD8m+$3gLW{Qm(b8J|ekm^Rcd3MaPx$YdBJ?0oYvyzK4 zxwPv$C$S^h5p1MWI6mM(_)>+dpLLtCg~Kq+k}HhL}@JdfK-= z#YRYAvd9yk(1U*~7l)l4gI(wo(sB+S!BI=#;U!WROy6M;C=CRNnosz3Or8QiXgMIk zV)(7>XI2Yxw9$i98SZ;=&Rcd{I5w!D@A(@Y`RVJOm-}AmjsBKuC7TCzAtSUBO5K|sDQi=>WGgoc z4B{$bJEmi9oJmn`$eGl90A@wNC*(}h0bqVrUlYne@$W?7Nb&$Fg)g=l#2z(XwIeal zLQr0>HG7yrNY1=w5XA8@8l}*dMx$UV&C5z&b3&hLy*S994$v-Ph|*}3Np$S`M{K{4 zccW2Gek6YuufhnBv8M$Zb=t5InsW!OEGCRbnZ&F)#eVn^Q6d8YZP@q$8kGyi3l;+i zJuZoZtT!o$>@+`!;o^*I1n>xzBmA?Ydt}1&Oe<^yjmk1&G#Zt~8(?@H-bx{=7=^$L zf!rm#M9!pqT|m!)a`Pxj>?%PRjWQ{&UCX%OC%}I@{aEoR>fkE=!6ME2Fq;s9uDrmnt6cMM!kkQHTXI4#{LS@r z4Z43Y9nV0_daT1H8@$Z)J5rd+oa~;Afxg^WmjiF*N!lpC9#a9GI`;oHq4|p40ZxEo z*jKo9Yded?beOdYx?UxS57%5WDapw;UDsS`SE_Z^xW8Z{SrQ(bW>a#=W*f*;-Jl7D zQE8Csg^~kjD;H?{z1^-^?gE+KI66@>L3MYXXvykXPIGT8UrhrjM5Gb5 z(xCWCsBBIITI)no01L~F@Ql?-&=xJ?$kOH^HHqfTA{7sVyD(i2AyFV)R*ZPZv&jNs zs2elTIfxbZ2#a_%OmROy-f3P+0Rk#|e*^(LDRTf~lUkf%-b}kZFC(mvAf*XP1r_v^hK zFw5z^04}nxuMY`AWU_+#6igcB!6(xcOLaQ52=G`j7OA)gTM$_`A0Y;loZNpgk#R~W zfjZ#CRlQ@NxF*z?m^!CW^b=09f)$fIi}OLrb&M95twn^tJH9b5M0$5SX(&~b08c3` zB8l4kLS{vo*5ABDM=HOzaz&6zBMic{C8-J9xj;@{?ER>${ut-1P&yP4Z%rEP9YYJS zudQ5>)un}xf+I1~N8zl~=|+F%3AGO`uzf*h>?Vcq7D9&{^1?rEgQkaY@F*H!Fqt=i 
zCfg`OKMLk*E}d;o6dZy1h|`p-)xs`@x*gM^CSV3<0Nc&DauJ9m2@n&|;nLNC|fs~jLJe8Ua(po)Qe<&s7*|#zt}487qyW8!k+1Z%YxLKQ7;QRlF}Wb zStSnHZyi^q@HYuhepz!6{1KCgeAA1GN1}=_jxNK?UfbXcvyeC9b-t6bc@0o1V_V79WQaGHN)FPpPP z-L{<&gjsQu*q2BjWJmIG4N_n+uxC&Rgho2O`LN`(nm4bKMbsG?QYjsRuiB-de_ti!GRB`5{ zT%6ZrEu?D#T{s5Cw1`3Jg)*%{ZQ(Z1epr05%ZXpJ$=`!j3E46NtEY(~-sv>vVgwed z80Hb)ii3?!Y?iTD=>UYlQck=no&(*A~~;DGdEpDe7OF=zymv z24R(@Bb2rYlw05bx7R-iO!Eu;KaA&dv}`ObHb_xB$eWKVuET ziYBX93UGidg|)p?a_~ao+3V*k+gU)r+lPP6zz{K^PkeN==;a*@9K)A1uSVD=?+(sJ zdR}KZsp!%bVO;u^JgzQXraoxb@wT#Gh#^@SdKpHlPO5^oeqXJ+HfUi=k`jePN|X%G zoDwy1<8Lkv50xAVvK@Svbhzmvv*iIjN!NbKeIpiIX0ZS;Sf zHdF{?w&s43S-4&99d?V(4liS0q5J^@Bde*anzcG4go$9_`N{t*xTamE5G}RwS&7=6 zFwroDSyniP?#N<043-L??ddXwCWRdKWTr4Hq*mawy_hLH;)9t&!k!0MC>RiYb}lo8 z5P{D^c0vV)DS#Z}u3c9(Q<$X-1)qPFZE4`M*p?ApzL+UAVSE;TjV$zm#rAfY!Ys^# zxIobGZQMkH+cTUr8OKE)8q!b4-awo-= zuu|i*g~Y|e7Ygno)r9d`lbwIKiZh}krV;pTDPczjVr2q}z_tK7Y^fWLLRR3f3|rXd zblOs2TzpomUmJ44?v|K|y9GWQB#}&0_#@5nfzK)$D314d>UBpfNeO(mDD)D&8I%L_ zL1UCExY$UnroqK$7o!322^S5LD=P3=XaXWt`-?^+JY#zfe6|n^L3@9JA%SAT_$;KL zypp!DSe)(xpWV_Qh;7*TK1X#WXAj1YAo0wgt63*%c5?8}cLuNq#)BpsZpCc_$v=S7L;=7d~{n@SOeK`iQF^$BG@{4ZNa2qlsc)IXO(=bAlp@8mV&&oYRCxha839o znE+9@=tCYC!?PMsaB!+sl$pd?iesR~~J-lk;aBNLc9@|yvc1j&*Gv>i23^oN3tx}3$EuyS+5S)VrP2EoI z)LTS=k{o1nLqKVA;UeMI zGn_k~V%)EY8qzd?6jkmyktby>%=)SV(F&mM@QWFkjsg(R{@R zIh^nLs2TBm>mY?~yX1}P%Xd4hZ`Dor8Q8dfa%A7L2HvrM;i>sOwO^(k{vjFVuTDi- zhSSj$d6trcX54gi!$!(1XRTZRYg<$cY}jhU?L{8Cn}9|{gg4$fD6D3ILdcCo*&N4J zA&`rc!49lgb0ngaZ6|psaki7$`7NA=vN)%uHjGZoorLm9gL$*Cj3}Kt$BA>wxY<{- z?NNzUKW_Ga%2|REEJBV%B;D-N?Z#ZN1Dw>b)~m@@)ZJb69YTKkn&gu65<;~5b`s!U z@n8Kf?U^jr*K}KFjU=D=zVI#TQF8l;64)aAQAgOyBisVk@S(Pd{<3E9i#YwOs)egLiJevj(q>{(5ml5QStQlMlY} zvP_n-YXO1Ln54$MOnHg!2{JePFtKX0Qy}W%#WQBv|Ai@e8B99wsK0m3|vLe&5(>j!? 
zBbsJ^7nbdn5w2M7e{Ic_IV#b{@HsnfkON1um zJ|8Hl5QsS$3X<>e^y8F{!fzFhl8ohKP^rlM1z`2$p)9pNWOJ=iq)WudmPlxkOWQ%z zdOtSjaX*%HM!1yE@dI%IHxkJ0bGecXleosBaLaQ>HR7f^l`FAWg~KFrN@azfsS%Ua z{&U(?e{_ZR0Ou-9iJVeJLHFb?nCi_b1*x1;Spmul$>QhDDV0}uPN}R-g~+~g40BGY zJg0L?WrADdLzQk$DU)CaH?psENs)GShcjE=)j6fIEQg&_3P-(4bc^boQbh1|PAS54 zh=d@OBt(5;csQq2u>o=S&-MnNH{90|L?fZBf6S5XD%WmKsk|fMlo1(-Cr@1-k+Cxd zY6am>#(PV1Zg%ICVgQ^|DoY$9#HAET885$ib4o#KWr8@1@oAA$D$h+$sVwAIdxPLV z=ajPFB(93dIi>PgH>Z@R4|UBsrA$OlDdbt_Ry2wHTb)}Gmm}v^EYhM-*X~s<-3*B) ze~H+!oLkX8Y;kAJQTJSxK5{FTJR`Rvl;D~~lGPGP=x{l+!&OmD=?%OLTrEo^ zVa+!Qt1~(Ileyj?}Tc-lWRKS+jAk zPCpXPhy(+qM(7wJ(y&5zIK*c{Xi?pCh<+2p-!YvxvPfNHqTYPaMJ^cB;a5cbe~Rc+ zhhR-NJuoD}jRqZy249-j2|+a30mUg};Jpc08HIg_L`gm#&i#zs z;jEt$S3x+JeSeTgLuS0`fguU;*&~5nbqYvlN~w0?uchV>B6A{}N1=_8S86pk8~o%H$q?gX2=Y7Zs}xNT&nudq=2x`UA`@D}Q(W3iv9c=9x$tIbQK8^YjiO^w>d-BpdLr<@O0+&CfoytaPhyZMpWr{5xbqs!%h>s`sf z-27h11}lNIt{(^s9i@56w9OJ^B?#QlqL(vj7 zx1a;36bgf5Ppb6_%!1rg;e&@X5P1pk8y@o!Uzv#Iz$Gb^icsx zUbqn{NVzR?k+z$Qe^ig=s{6Wt4@P7uhE&{--kQ%s9upZ=<$y_@pnR^v1j`v!N3#QV z@6IpEjZW}NGKv1;;aqZG$lHa(SD|}Q3iw$wMSyf86N~;x$r0hu?(!GOB~LD-xs(t|s`D*0zhf7Fq&ZUsRoZIi++@pwdV zA2(a@uq358OWnpcw4$Q*FzJj!BE{|X9qGpV?lZW({*m3huWD|OKKme&uS@3b=t&dp za(jK3Dcl7&s9z5c4x2y(Y2Vt(Y#X>1Y-D(sll!TxaV>V+pqkG_c6dtK*aBRpo zm(ppMWy|xxe`lfsc%f3rOAnN8$aW~;Itff(Tca)?G_>m>ytYjCXswBTcd{XRAaVb8v zX9+BHDmK27Gl6?iI5KNBCN!%7(HZHVNzfFhHkA9&f0c@$Rhsvmd5;7&qTW87UOb8` zy4MHcJ~VmSJ#%S6)Z2&tCi)Pvq9mBPI{^Mf?<$Bbjd7QQC$+*cM68*oH|g33Ku~d9q&Ose!@3qJIvt1` zJc$cI?m?&VoyEDPtMO*Xz;NPu^>;VBUrU73kElR5NUsyqV;$AV+UqQlK!z9%f`Dt> z2|#E;qeGGZYIsv_4h(SZJhu~6v?ozTi|DSDf5gIh0=5Z_uv$hG>2JC-tmS^=Eoj{q zn@LJJM&M=VMhM%dn*kJ-%76(b9z>pkfsyQpzA;lx-s) zv|tz?T3_KZ4-d*jNS5AMSe9f1!w2MN7Sh+rLiAb@_?J8fvc7Uu?n9Uyxm$uhX#tj9uezHMfSA!ZE*LX>!l zOnVS@U(gzxb$`!Xk=ZoMCgVYFn<2+Hhu7*+Go6r4>~;w#v|qd_)eka|_@d}9e-F{Z za!RByEo^HbYQHG;&g-JoJ20~I*7XAst8fMmzEcSzulscbc<|Q&o3ew=hgZt8Kffy) z54StGP7r~oLlPZsmLU4LEmdzuOG(gzZei~qh``f}`GpD2ct}MrRRoMT0s!{_$2AL8 zBYU0ITpCz2iTwka1*hT02yN7Je~vm52{cgO 
zQdj2#?lA0#rTd@>qc`j}S*p7+t1EXY{0r{h-Tp z-Zcd2-WiwM=fNW$cgv7Be?Fi4D)Upgue{^r*7-mV-ud^!@$mNQJ#xL8MvWwcmDf9o!yw0Y@SI3_pIF}-m1vu;)_n-!~OK8t2P zYi7ohnXzK#vtauEkk3OrkJZwj#nStg%|kSQR?2u5%6O~*9{9rJYw7pZKh{Vt{VK#v zUeo{XTrpQ6M&1e6f8DEUziKU$D3z-fV%#|*@yxP9%+KHH^}N%}RC$=GVrHtFnHoEo zsnLR&nj0=E^;-lpm3lByp00$C%(q>^O!fVEJ#;I?eo(vTs}Re1-XXx+fS+Y!{z0Vh z`M9PNBY1&4??B|TA436=!9tCPnd-qzmGzWg%+wb%^~Fqme=$>k1>;BSv01u_GhI#;}=M?BOiK==(|#_U45wA>iQ6OTzj{O|a`7DJPH`@F<$nFEb?tpyGyvS%1d5bNJ5!uY>e?9Jsc!cvZj-m{Oci`e__v>Bf?IVA*F_vaP`>3G84p& zHv6cGX>N=Nc%z;m#+E}=%s|BxxGMO7NCNn1bBg>BXUIsXA=n60-O*xU$wPMC41SgM z6uI8Y&Ei+vaB_u1nA@mC*9l+7n*vTDAe@ta-CTYqY4CD3KQ518L+weqMwoq4lP8k; zQQBv*f01^Loc3>gTO?kq%}jr7PQ6z9&Ao=>W`ytWH~U)cPQO+KMhIkSHvwC1&cIf+ zG)I&(et|`(a@*eBALa-|RoO!vf(SoV1Y>!LHtc2RP4{P#aQ%@J?Ht@UDnNxTM)A9x z^RGHWdf2h*fN(G(d|)SES)BOKMDzC+r~a!#e_<5br)m*7B$2&h6P@q$eAji6sWk^zb*XF-e^ z=>W&N`2c5%gn;5qbh}dmRvmG+aM?}@SXaGS0qfdM3t0E)k%=(h$=bx@x~|MfQCMzn ze?XIyWOXe9?LQ&o>wdCN9yW~xfpx<|xwM`5CJW`c){}MLOq96Dz6p49F3Qt{n5->7 z*(h7S=_qlLB}`UllD2d+Qnuujltj88-Q|p5Y`v)jbDR7Fku$KReUk<@vGA-hGIJ7A zD&M?;CN-JU-jN1i-qeA-n7OJ=S{!Lre-x+~P1%7W5wB)*qdNCZAV>-abEHlr0U}VH zL(qhiMnFzNaKx)tzIgNft$Rs%CM=HVA;NTi@yGjfBQR|ZwQ$lWwOO8?3oQmR6 zQRWYilSFpaD0`1!F0m~hCvPf@jzhb)1(%tLndj_H>FE9;+wka(!4Ht&s#7?o| zOqdmRPefiK*vjohd7)6#1(CSifBpsctoR;@hz;ikB0{vvCumv`6Hc$oj@+Vds?ifV zHJJCep|o18g|piX2Xd&wT} z`Ip*h81_I_gZ?%8X$jE;K-2VdwpS(>hUsttKuP08ZwUSwH{=-|D+~UNoZ)z9T0nMB z>^-8L2_rSGup&<)56=C9e+n%J%0Qg6IxeCx?AnUIrW^|Qjd8Yb+%Qt=K-HWg<{0LO z6W+kPFZYDD_ZBYer)o%X|ZKmdjECo}5{>*XB=m_I8H-y@9I?3TEiLFLFvv%{H zehl`XHPHXe$nW=UF#VkI@BN&nKV(fw8(%F?f#;836>Gtwk=(>rf5`NIYI8On}6-X(*~&smrP zeRV{9H>^CIn@xO)e@=rIfxwKN5jzAvO1zgt1mZuSeM|x{u|c5VQfH6!hSK;J9ki&k zh7mgOej?XsiJ+-<)SLFQPh0~=Qt6{hlvqrHkx^%l-$qOwbXhMM#+uEQZ6p{Y2)(%H z7kK*%9|;r`fnW>&eHS#nd5s-dgKF2g)ak9EdCW}!CLKUdW;}&c0 zy)M4|Vm57>S$?JSzqfq;DSf@?Jo8$wpT2+lzrJNY^1tUmdY>Kn5%`vb@88B;r~QSt zw)UR;DE<7Me)ZSS-~NBs*MIrr^Zr*C`ZEsv&42fI|Ih#V)8G9s|KZ=g{{4UCmeT7# 
z{`>#=+u!_$f5@-@;~&n~|KD%z^*^0Y{`2|ofBg2J{z8l;@!Q{?>%8ULKdtGzsg3x?C*DR*@4LR<>U-Jh%J<#R1>b+`9^+iBzSo5Cmfou` z^zAvOLwx`Fy3fU_e&QMamgo3Obzsiwvfkgvx%2pI=&xVm;{Vc1=ihqi+WLF1_tNp# zzv|Mz{eA8IKHDp=J8tI3pMPTD)bs5+bG={r8aW>$C(mCEoxFaR7g(6f9~X~5f2WJj zUtfIwo)@p_R~LW3fBsGvufM){{e+AEOV^9%;_>J2^v>PS>h2$Q8vprEZZ-bO*NXA6 zVwC->6{Du#bH#Wr-qP=R@%ihE&)@Unu|(X9*Yvwyy#D&)_4mAZo(_|O+&u1oxy>pyn=)v%)P zOa8I*OaHO+%lg-Ke%W80A9vnoem~umSAJjW{gLYX-mg;9d)IWxr=zq6)&A;tWjalN zdjItFIY%y7il?J1U2f_5tp70Zmw*1%`RC4;N51ak`?o$m{`%$qwf$FoGyeMdFTNVa zja6R!$E|<)@8{-9j=$sk&)>h-4VM1v8;rl78?5!$H~9Sh7r()nSL4^yDyJW&Rqj7d zE0(7}e_D;NS@q}lN?er!#8$>H3oOP9&}U8uQHlaXO#or*CT` z>|M?-MWe6(^i5K2`TS(d*5wCke>wxTBk6yH`o_J4E;st=)}mceQa#-`&nIW~S=kwwA3_(Uk)>DnlZ;w>!x?(+IJ_^-&jv-YyBj|rz zyZM;LW^Kzt5nBHZ(Wk3pUrB(d9;>XUjoT_bNA14r^NZv+UHmw`YTeNKa2x+llFoGY z9JeLMAW4Dezn^>?fhrqjN_cAh4W!P zUzT{TkTi1iv0l)Jkhoi3-;CFWpcj89K__Vw@fVA}cb$1WCmwOS+tLSTSd4ReUE;e% z&aB1GLQD4&v#fgwiRa~C@ti4RoSY|Ir}vpHdABO5q@ow?RzLdbo5j}R*w8w2=9>i8 zq!Guo0G)4HDAOOFDS!UbT-4EZraJ@w+<=d$&1f4EYPp}qt?!W^QQOGWl9_+k6nfkz2{>RY7HUR`(d1KwET$YC2(cl&?6+j zw^Gja0-uoc8>H3CMjhbdl;#S8QjF zmKIZ{Xs-L6lv3_HIXB?hQ_g7oL0~UJF*U z!Hr=tVo~JbE8G4*VcdB|3^cwKL~U>A{`-J z(L(e-4|4Y##C)vE5M`dV!g61$R9I~Q^bto+yQ0r>N%y0;Z;9M@&0bV%dA8zOo~`)2 zBR`E_;3LH1TW7t>BI0>1brsaZx@+i*Ej6U0BBL?VTKz+mGIYLUrIDXmYdmMN+Q_M` zGSv^ov95GAM?`-qhII|=j-1BInqO% zb@j&L%7AWn1#Mc9Ec82|TY2_eL=R|-8WDp5-CB(SeZ~m`x=9IZGCpEJA4vnc^_6J0 z2Xq_L1G{&{iOd}YcZhdBi6mafTs0;MvVKG$bBDE z-1k$<9nj}}+yQ;wk)NjB-SK&oyjD~s;Y>&};ik=<7UNpRTJ=j?GFH!xte>Y|nFKnX%JtE}f5L$ex|Qr}LEMY>GX}zTx3}O41b0ZS1AB_}r_mxf)IG^o=N z5_pgn$Bv-I!vgEP#q-L4u_(jaf{edhTFmU8rNDpbm^MS15l3iP%({HPAEtqDZv3tZ z=Pmfdcu^|^KeEK4$-Lz{IPZcNkhf2J^zgjV3#E0g5aJyi@jNV#uCcc7a@uxV99hS@ z)eu`kxf+K48uAVm_j=aoU#PwH-V4VN&f9Mvinxqy?e6Em_YieI2uJJh(s(4C8(T~s z9g}~ck%O@3{jhhSpp@#5BGbC>p0DKHtnZwkmZ^bs&a!K#>6v9^=w)%XW1zmI0giT? z>Msmbrje~5@&lHYk!Gj{NO4oriGL9F09Ta$fY1eSoF&wceiot{$LPix0*{W^r_Xmp z!5>4jV(0;n#*UyaY6bBZ_47ugpNINOLZ5#raEQl_uvlebg&i^aV@Hf0cElL>TjQ|? 
zEN%e~VMp-G5x)#O!sq8(KiCnYUONKmeA=)hMhiRQtn~3tCU(S_*by`}U=}oGKBv8Q z1o!24G9EhuQA_3+vEPsW*b$>WcEq`XK%HvX5f-o`5Ve}~EenqwG5TXiAdWcQy_bLH zk;?PlHWbFON{-%GsBB}Q!i_bI(u|GyaL%#^-LQAQjeBF;S{QqJgzkEKW83=U$5m_h zxuLxb$;^b3s}VR;>v2o~)cFtL=>B}z<1uEkfihE0Ps=SJrE>sX^Zde-Yv$@b`@A37 zK2zXiH2z)o6q?x+c;*}?5Rz{lQMZ3Stx%ual-?G%Udo7LdIh?5332P?zKa=n)yz9T zZQKIgdA_h{s(XFbY%pmGLp>h|jf%#<3wUY*Ii9v$a|fxW%4H>p+UjD7b=a?S3UP0a9J`uXu zGku~FsF$wY0vK4QvDz}FyK-!5+IAkV@=KlO@5A5p+Z;dt#DS9fd#X|m=6KKhSDp^g zi*R&={2TlnI+s@6vz%>UISrE6*6PoGJYqLKe@P2|+pn{%zUi6s89IM&;g=BR(Ow|? z`ZT*@+(_(-DTTb6g>& zI$joC1H?l{W8BNf8~=X{viXm%f4}eU_rKqdrG=6h10d~Qo)Pf;@f(IfIp*Y9a_~DW zBd0Zt064Ez80zb-08rA(KHW%7IB<3=BS?nb`RVEyHC!jI z3_hoKaVCzak_D+Q8NP$IVp~fRW?rQ+ccvE$h%|Y*(vD)C1uB2Yq#K6arHtX{=9Kp$ ze>^Ea;bon*<6SG}``Tx{{hF75P4iOZ-lMPTwV!!e)2B?!^9o^pzTSj^;%e^K_q;lS zOq(pALZh1A63w6h#TTZ5U$?q;+FLD{Dg4(03v}d+tHG44*pmxjPOAmUL#k6ozNSrKgBEIhdd5>iXKYM~ zKKT~L`jX9@<2<=_9}zk#%|quMF$ZNINcE&M<<8U4i|>D`KhE1|tYBFVXB`22lcI6v z!307t`pZP2E|(`B&7ujaW%=CkN-yqPi)iEjOCw^$5JwX_bAK(j=vn9FFAb!>z=HRX zF3uY8S*(k^!vStYQD&2wJ!RN8Q*mlE^-yzLgTc^w8^U;GO6&VJwDEu&7(~eXHUccQ zW?}vV>T!R$4)bDNfM7kCK|qq&YF=f0yEtiW4}C}Q59J!LRO74-FY~00+tyH64ZKK> z344aoJkL^RqaR7DxWJY(c&CUM*zHUFtMlKAZ>;#u`9#uL@)i)Qe`8yJvTU9s@s0WX z=4Ua)!m`DqG;GX=KWPKE{PIl>UM&nM{-pyXoEv|~`FonW{TQo;_ez47U`aCVS9nzC zhpE#aODB*}y{D(OnUllH31`rF;Osei81$0DijS)gyUs=!_1AHB-ddJPEl?HdyiVFT z43iRWa_x#75!*ytr+f*l2&g?7AI zDo}qlYxNYU3T?rlYL>bqP&H>fh1G%)MvEe8Xtq`Cu6(VNvkX@sYodZR3HlIFHH(xZ zQ1u8E05!MP0#wafQ+-UE{P~du(=!lRKoMaxA&4{8!f&GQg}WpJgBLEf160j5C^)&s zHlDRY$1hoQb<8hng%FQi@O+;~E)!6d7hr!-HESW3A(-Xg5~ymy??B?xA-h&ix2GT} zfZmDBaHgp=B55=RRkIWi0UFslkGj9l098GFIEHFcLI_Yb54~c%QHW5%TM0hK@}N2D z&@>-Ii+OQt)7l~&ja)*|8#A+I?}V5&3v-nv?7QPbxQ`m!hKqkBWC$!5Aw#V85<-88 zZAzk)zS&|pt@RA24IM*W-O`VOms4q5+{Dlj(;}$_u4F&rLM%zPg?!@(jBi9<_P*mq z*Nl-crO-79FM{IQ>*-Td9V%X>@0OBwQk?e&u*6rW_2pF=w616ikH%hfNi4xFmSUh< zl&Z(0EGeTgBQiH@z;kCRUwH0_XTEIwu}+y4zOoYh|nyC=;TdbuQM=vy7&2o+nR^n$CX^)#y>{ 
zsbCTR{YWA77E__7I^viJP}+2oMj%t+sCrv=Oxbqeoy&zowJhUJWina1`3Xjus{Fr3 znQ9W@3LJ9n55XuCqAb^QMUURIZ1xzvN5iwjh)1qhxG2-PTnUuv+`_=Sk$HgTbai?H_05>-1s zQ@19HS!^wiX%FtB1yQVaC{f*92Y-v@npXv-NRS(@7&8olClE)}W*GDHE~SW~5*DR132m{E?v!Y99C9p8Pl+Sk zDFL}*;2G`a7>tKwu>BHN_LMN$fGKfqT&S!8ax~D{w2l-Bk2GIy_rbUsB5YD33gj3t4 zXYXytnQm{U1zdWF*5upn^+(Yb!A{}LY%8b?3&*2dJNg&Y-SY(*HT(j9bDH*?buAxq zMdL}M3CFcPWk+jFZfE$4db-;Zl5Yy1_79@HnHF&@cY8Ct7*2oT$Q5yfV+eBOBG{2D z(cVl%c{4k@ku;~gt~c{&eb<|Ly4MyK{fq-o!u4jJjqrLiEz16A%ihe6+YvjZi>>5i z_glSRpQOdDw^HrR{Z{KA-pn4fBi_lFypx&wZTwCo z0hP_|cHDQ;uj_ve$=flaw&NpsaX8D>^=9@uuQ&7DfY+j4V|}ua{JmK1_?Cr-{?^;R z-b}<1r@QwOp06In?EVn5PjAc_-b^GtCj6bgvK6P(fA(hfo$ZKk`8MtiNqFvyN9bC& zHze65h$xp}-_`0;-}9pb9K^`?h~6P8+uXO3*h%qmg2AjkN5| z93pe*H}K7Af-2xCG_oh~NTXk>z_{K_L>#d>`;JHM*PA)=9C2*hn~4y&p6|OD88||m zg#5H|3v}lYLZ)Vg54j9`Ge?1)I;t)5dTm!Eov9d~7^Q_b6Up98i<-6)@IPARn|^qU zro*>sx)#>&@(Acr$6e zsajO`%-TaJ*PD5Mm*ZHnH*<`Ay_sW7yqROzsvLiPhc|O{Yn(%|HQH1M#xR@ekPpX6Na-gdTf9H;T7I!3WK zbBtwg=6Srogf}y<_to)YDyllSFUQGWnd9W>6z_8xd?@c{U;OfCev&`)pX~JgUO#)< z{f2+>QEGqpwLb5%BdHS{-b`r1^(j44Cm2I`>WAb{#}9(e;m0PX~ciK7f2^nwZ_^ibyFXzYL34N0i`#+x?i+b#3Iw_|H#;>M_mgMY}IxAemcr>^zYc=6^UaWIotk&tM zxJ1*6FzIO88Ars}Y;I?KQoUyD%PiU%bCgv(m%CW!QpY2<8O!Rqq0P#jZTQP;h;M(x zfU;~!;Z$O*YW2=i|3(BIN~LMOZP`)$wm9ULCyymu)^1!(v3gp?x4flieE;^}l)w9L zUw`<|KYr~@rt!~zZRPh?|FN(A_kVx*B%P(LOTo!|ypJz+<*!Ly>GfUXiu7vfS1#`_ zZRIbet@QogK5@eRmfd^wcXCyH$_4+eswihwMFsqfXVjuYz?+0OVGm9Ia{-G4eJ#g? 
zbz(R4@A(MZThmqoq$Mda1X-O?F8e>fV8#eGE5X$VCVmYNi$&X+`U3)jb4q_`OtTNE}3nkfH70F1EeO8}6Iw&4awkCJjB zAEkEVsDN2h@5i^`j0miIQVMysvwWnoquTbIe*oPLALUqdujk;p< z4K4DOtd}o`)>VkEI+tGHR^oqf$ak2vPbBoXv#yL#j880JW zWv+|sWo53P8^$MOXol7r4M(&VRBu}%TCk7xK79n8?d(o7g+4b^4t;LyeDyhP6W+~6 zIO&y^g#Ao0QtVvXTsR_^WOF;?lMifw;f6$>x4wnbq) z=yLmgT5P;&_)^f;if#!;zJodOg9_6va9-}WO8=c7&$*heh z()Zt#N_S?imi>?eOQaYW0xQsq7vn-)@T03w2-3X^-2=k$x zb`gH4b=+H{G{Pb?>*reJX_XRbGvGq({eWyf;T+WToyoYEGtoamlG zIBgNjd6rdsKXZFxIdgL?XReOr%=N}{=3;V@vDws}W6j>r+@4s@+#;6K_izP!KM{4t z+#;3}?rv~IOT=>W$n^@>`w4;D-p`yPmNS=#<>WM%EggTcoVi6T=Xtyj?`IyOq-AGg zlW#r`f~JpM;yhAWUUBCyD+1%ssweEi=^=QB>2jx0q}| z%CpGQ2ed1`F(eAnN>gx7p$(TfCg5FEQ|G8d-!K?21GPtHJh8tbTi(u*K~JefFhZQx zV%)pDw$XofQkHyEJDpAPCS{>VZmP6kn9^q-Lh|>LXB#~kS?w;j<&U%4jYz_F`)QbB z$~kU(siqkw_ktCCL`BFKQyGh1w^`S%Akon zR!%cf`i@j-mj+Y1W3G!Py>LC@9|HiQL-g$EE;@fv*L{RHztkYeu@+Y!leYf|L7PSt zE0|J+ER+(7f{fkMo(k^QmwHB!T7B+p}+ zDO!KkXv-z_%4*^U&PAZ<#&yj?e)g&|VzVkEcB|7>ce^@IbVnbeH!L3&!`E9@*>}$> z+nZKR-nD3hDw|^8vH-I8>|8a))>TvNT^3B0P2Rnr|3X~8`&Wq#jaECD1&`nLk+3X1 zn^;Y;i`8^Fv+3E#ENTv>*va^15XaXun|*&-TPQ1bv$z$F=Vw2opEx#SN3+0|hIkXd zJ)0UY!KvS3TYFv=w9xvn2Q6xauE8Fx?O_kr7VN=#IfnIv16VIFu(n_i*7mRmIUC37 z{Z{>fJy^?S424bn;wwwrSC+QlBTL)ANyPuGt-lHK|7X7UVE%vdd!NSRNb_Bnn4f=>M(4|m z|F3)W?Ma1el!#Q6-fXtg(H{9Mp5=d+9L$= zqia)UemV|*_-e|vZU8^M9ObRm*Dl_pzH8@9_K*8UKesQ(r}ib~0)RW>ndHDd8;r9~ z*GMH!K?41&hZ^xv5^!lK0%d<<4#~5xu#0KSriEv}5mFraS|c@pJ^M}oAdti3T*j@( zyGHE5d)@Ku<5hA{Sa|ctb?|Ez;ML#csT2r?k}p42JQdDLu@rx5qL@r~dA&=F zOjNpDP_W$e<*5OpZ5DJhgzd;lJ5>6@E$&huI(u=S+y5_Y0Ezaf5^zB9TN{k64o>cu66pqyMeAUocDc z7yrsb=mq5n#Z2TD%efQyTUNz0t3KX}L@$9;-hVCWF8es~BsoaQ?YQcW!un&mDeO>cc#_yT@ zupjazcAEzz#iV~gWm}J9JbCBoL3)@lCm>J{)M*3L)vTXtN`$h@_;AJ(dxnWz3l+w- zPAn#S0o%(Cax2PDA~dt)h>k@PTp-g?9^^f)PkGum4WXL|JeUiHIM++&u<{sfq%8K2 zTney&oP9$Hb^!HXFa(7_v8#{+RY!yi2{ThWjWG*2)=+=&S?_hQJkzDti@4@~*SfDw z%b6_HM9{m4^;+EDHk5!jZ=ytYMFFQdM#Ak?e4B{n$Zc@qAQ*giUu z>RK7q7^KK%((8NTo~VP(J%JS9SlM}5^nG0(RBz_cQ_M>7>+>?`+ctq-oU`u0<=~#^ 
z_RT#JNFsmki7FnrNY$mzw{)5FEFRejNho$yE^!|0JNPF;h6F-?|0XkiL*X0HZ4}ig zhK+9CIc_(9C#6@S(b!XPMiTGrMbO}R?~{$iiEj;iFY^DMR=$d^*P>WU<)U?&kV4xS zCrD=+$(z(Dh_z#ADtSQLgAfdo%_@qo5!omGT@HVW;9zt`A)Uuo zrW)Zdi|gQ)J-uGkf!aYDiFlO^g}ccW6NzL_TGlA>WR;RhuX!Acgy@|rq=&8|1<$Luas=pBVUv0tj|H7rmAkeJ+fUU(-8S)x zbJ85+)&7c{v^?-brB`|L`Bgefi#il0UFLsuvwxn0n=1E%>rP2k)S{M^YQU+i$O2=Y zn>5j(bzadv>>grax)@glY-LBe&ROjmI7-fYhS}Q4rKHT;L8=s%CGjlKqI^;gs*}Qu zBuG*@sTJi=sjTulopH?fw2J5?v7l(Hk%qfNb5s^?6w9UfZKYe9(<*VEbzR+9Y*l}b zQD#QpQeKh~4sKZ|J|F9)FDMWE6_s&rg*aKGa@$(8p!Fs80`iCHL3Q$Geq9!mLjsy< zeORW+S)HV22`H9zzu;=-t4wj?2K2PDq0eyPp0lAspMys zDtVh$SKvs>5vcly_#-PxyX0C;CWj%^8g;A&bG1}w|By4Z#&j&2-W=1F>lJi2-sWYc zw1y{B;B}+GR|3Mc;KW1tT{CIlHp?wHIqce%qXNgYtQ~CvGNZXT%ig)<85Do&O|`i@ zjTTc%j#bTDbt~oS;&-CI*X8N0+iGqnl>>@H@q&jp;gL>p~^)Ly9&{N4G6cpQ4SYTARLR||X?zQ^t(WHWF({mF!PU1idVI-{qt zT~OKbi%We4B?fq9dELh~V!_P8o z|FsmL6n2Rh;R?+T&aw0ypcuT(l^iVl`iRBVLR+kpx=)PqNi{S9M5GjKA+VfUgbOp{ zJ?Boi8tg-8%8sb#^JMp#BJo;ifmO*%m~{;UpG%4>r5&!yNGnj%^gxxn1Xfii4N=2%?XjNw(W35SvaEsm zHtYd^IB?Qch1+OuRSj~uQ&qt~K+D5{M3T1E&^q&F&{uydp}LZ-8SGUiN%iX^Jr{I= z9BZFyqQ;#u+&Cu9Aax7TQ;41i6`P=Lwgv;MJ4ep6*-2kW;xdm{!m{de#78AU-zTiu zN2CYAqwikvezrT@mSeKxH;;JT+1XLV}>)FKPM($y^Rj~q6r zq98Crn>c?|g|dL;C4@!cf{)T8T-)1%Em4c4o@rk6&JZt`!%rX_ep+!am^*WZJ=$b2 z6jz3xM&}T+?>6I1Z)D(w;9P6ydrcLKIR@MzVvPaCSA=QAMc z=6!t%C@6$qGuZYhgS*H^IFQ+cGug}*3@E6EaO=bV?ER?Mb+6h$fSi%Y z*++kB3fLfB!?>DDyKL41)W4MN!Yj@slyfax!`a5LbP=qlbWLF#7*3BULs$iwX1aP8+{cibHRx#B_gl5?K_SL^(2nt$2sG%DW48Ew|mN)V#a( z?a`u`1;4`{T9aVAHjKRbC2F?$EmE?J*9aw!_bBsNFOou!x*y4o%q78kA;MTM5>hRT z0+JXS6f|92({3=Gce>Ey)KfQVJUV2FbwG*^Pr!1suf-{wLXx&7;VocNm101rk8ytl zsLGiDK>YVK4hkTWcRJP&pv@(O*b)|@M=xdFqwl5bSl<;*2rW$^zEio~mBIb|p!((M z#p_vQJq;Qc+|mh@Bz`!0>5d+aByBb}4v4Y{Kq-v}P--zTQfLilSG)UTPN1e>{MkMM z3Em$q3i(Q;e(f)5ae5JZkBcT1a0Y*nrsfc2!(7X}-z-N?#$whsWbYr2z|OqS;gBff z@ln(n$A};77Vb=DCY3;yv?SFZ@F;v%_@qRLnoM22b+@LoQ^il5T&JJWs&BtVb!wU{ zqNo+8>6e!qyxa%}%i+Axh}14mgak2~H)u2VbjsEt5xg}JgkMVipjqAQ)Fm2 
z?3o2b7Xm@DVNZnSlaPH?ziki1c~yMQ$1$?m3`NEF$5hH1=#V)wYLjSKX9bm7M*?m3 z3HgkO5)qan2JyKQcVQ6q>gL@JA9paGH z3eLBUVw`^I_a}sMJA7UioQgI}6q}at1cB~C(h&&Sv{(pyS4bBKnMercI4ro{+D~Sn zcVPH(eu*~2cS=w*^2jdIKzw>ctP^~qz2FT$XuEuqtW1r9@`S9(I|6_8UR%N37MkHJ zbhJVy107S8Dy=$5I+AvEJS=kY-w1q31E>EP!rejrDD_zj-RQo#wGs1Tz$-g_+>`M& zoDmC_z9!$BMq)_HM;H>@KC*mEp? z>$cdO)3b;=B64!EpA5yU0t~@eGah8Yc$CF?kMUtgl>v>_uHb(>^;%FLN!CY=;OL@r z2ePYP;#d!OYiM6s^q^8lWS)Uag_*?t%JNxE@d$}9pL~xNXcjZ19C0MbgQJX(?FgK= zY)zB`?-^`R3ERa&VdE?d`->ivlgm42k?%@7=MlUo6;hmP`cfc$ z>p}X~2hvv_q_6%F(w85kuLaV#0I1Ynq;HW=Mu;ETNK}7zzyO{QZ}v{WL`Y~vk;B@I zaB_W5A}p3R0_ZE32T-s3u4?K6Eu{~fAi-S2=}Izwy_pqzXCbAoO(1f$398;$&ntxH zAt6y_&<@T)+5P%@IUmRH|Be9HADnTB0G^Yg;ikd|tw10n@a6O@{trtrXA&Z2=OjGfh#2s2 zj4-2M1moH)lgC+nu}H~K$GkZi$kn4`q&AS99DRT97tuVyz6-G$FT*xkg#eI5F>$C) zhkPr=#vzN91j~l!M9k0;6+$g;w?C1xJK}i?H74j~W-k>)q$Qd`q00&2X6LJYMNbGv zX58N!1xe|oJ_IEe2w;>r_AMe?1VO>mABIIt{SZ7dy4x9_Fol~s$#pUIMTy;Apu`PM zVsd|M3s5D3QDVf2ooIi3)1DkC6HG=UPkze{YoHh`nh_pTAS%dm(0&_e4Qvk#OZgGh z2Tibrrm}_ZlN6hQ4eswM#?Ee{ipVw5?6}7dp8_Rr#N3e~JK61Bl`}n1V&=3_RH5#m z#Hek&wJm zHj2?0np83pAH%X67K{>Gv=@u|J$mkBl(#N<2;jq;$xebr0sBW%&A3H(~aCIhumn(T1LHkl>y9Aicg573nm z?CG<(I9BoW@pXIp&S^)ATvIi7HmJ!=)*@gxJMIQ!4tK) zF7nZ&lY8lrN{(Tk@bEQRY+ibXlYcd=sc2_>LawCK7M318O)Nc1rfyH)5tn1mZ~|!! 
zMDWQGiN*=4P@C;pdS;2dHG`{UKjz(mn_N%!^qn7>uyep&)<*>xdS^76JRt7P37P3o?^}p=zLM^8)%^A;;uN zQn48EL~EqX>DgtXbkD+Xvc-Syx-^tdA(*CPo+|DO|B0?=&&|2aIroY-+n*s3h-VWK ztLyU89P6c(N1I#f-W@yD-Ptj_i)E>ALS=RbZ$|)jIgf=eoZglXKmi9*Uk9k9jC`F# zUhFs37|MSBe)S13*ysplolpph)+OgbJOgEsA90XA%7ST{#2S&aMe2XQx&vL10n$}; zvP`o$WtS;&AuG57lI-x~wVME`+rjrCsJC5%k+Lcrqk0LKCr4hSXzX}}D0i|Rk-m*P z)}qoy(`*h+RIbZLF}m<}_NcqV#fqm_$L8#G)pa-ZLPVlE7G%7H@X@vM$9HlHs7J_b zfLYzOsTS6ht`RKK(CL5Q4IkVBn-QXhzZ{|p+Cx%T3RUm1`>*;UMI?pgA9+)|wLHDY z=HOoY`ckNPAZmD1y!yN5*LIOH>Nsl47y&=@=8Zw=Hg!c2tbBhDdOD6kZ0NLJPR$f7 z_*kSq4JbSine!c8=p2d4R5D9a6x@>Fh^}+9qsAB;_S}K<9?N9M7@O;kvDssc&B7Ry zMGJE?#@KMmcoKVd8rK0jf(}W;Py*U&fMJTCfdxCeb1_Ukb=htK7m$V=LEd0R9`_I% z;q6rPn~aej9x#8cHQBY!#Pq;3odZ*FhEOy=E#aUP9YBHBPIpNtG-1b#$5-D%y8-V^ z1WesgEo;L82BHnR?_~*W#YmAvKn;D*-|*2-U+)}HkC+nrTgzm&zsFf#)7e6t<&_-! z`N?PN$)2R$4C(@O@!*5nwWt3ctiNh_6B~H3@I5kIlKX$81cCZiA0MJZ(J{+XA-!u7ZPmUZJxvgAE4h7a^{-(p&Alqe&jH>v!Dm+UFuoPL5F1{-eTAGMQi4VMfmLcM=mxgDrSDT z$gq-8YIuK%`H_Jq?SW{(h~gPnZ|n%(iY*dCoMJuv$ljQtTWnlr*xsUm?_hpV8o_*6 zs>`MjvdihRvC9ipBUxt9d+}8{lV&C*?ld_%m>*dIgJynY72zVtIWmKUThJbVj2LQ! 
zxYZZ?+^)XZ7rI^v0Z7ZDq9=$x)-_b>%8*tUs^;uNU1^~-m8NsY^nv~IBEqJBIcHGU znh3hpm92{GL!Gj%I=wTP%!p9wlOl`fV*Xf##O$ykcNr&9f+(MOIn@!*;+Hy|Jc^cN zL`cdPq4if|65(@ zLs}s9{Ln4Am_Zqdc zhK4WcT}z-7ir7d|>Rjs-IbSP9B^~yW?V=hLcC>;r2;d~HK#F#hH!rSZKQNiM+Zt&% zhDw}^b*`G`#j1clShwoAMv7+2uu?Fj`&lU%k*!x!2U!~m7VYHI+)?a*T%j8r(R7T~ z;YHPj_J))@vTaI>$^M9XZcJF^Dj__gM#?h_adL@rme*V7yv`>tYK>~>+yV*b^q5yE z7*sMuDHw7?NgR6fQVJ}f6j)TsG1+p1H?UeL1q1Rt>7EolA{!NfHQ%D1OiMNAzt{8) z%y9=sTOY2{e4VAEcjvf&9N5ofml{_eAF@|{Y^6vBysmvqtN{B^z=u*o#67L>QH@18yl%n!vPH!O zF3l{B7p;Cqu!bgg3|1+fQ;|{>6LnjilxQ+o;ob}XPj|2?P>Umf!-G{M1rJuKA2C=- z`O9Ec>KhH#Mf{$Ab_eSk9;}2dcDEj^lE@Br79}#p^<@NW?9u|k8cHYJSjns7FDx{F zPvRgEnIWWeAJ==&g`F~RI|bg53L0;d%8*?zL|Tr9q7nxh!5X7q<8>%qm=LU?H5?Z( zLi0(6jn^>(rXHhz-o2jlIftRm+f7ZO~#RoV}q9aw={i3+XIPV zs!zNF!J1;48o`=Z@Es^?4*xiq2dDMD?2WT>Cl(|TTnd@Q7F*Y=_N zoFXC_!J0CpgcaoFK8>IBkR$t-K#iMi*eJ714_u$ zWo|U*7S{xFDxlsYv$*(3_Tg(aCUZ9a`hFewEr&;+oZ@v{@BMHZDO{bx&-cwIzHg3h z%GV-)$!>9eZ$kYl#BBji82ZD=MP$&sgiQKdBve|W`~9dI>9cB;7Vog*=_=64AcQZz;s;3!zg$Jb5^g&{C($, $>=$, \texttt {max} and \texttt {min} are now unlocked for the type \texttt {ClockTime} after the implementation. Typeclasses can be viewed as a third dimension in a type.}}{12}{figure.caption.13}% -\contentsline {figure}{\numberline {2.9}{\ignorespaces Replacements for the validation function within a pipeline like the above is common.}}{13}{figure.caption.14}% -\contentsline {figure}{\numberline {2.10}{\ignorespaces The initial value is used as a starting point for the procedure. The algorithm continues until the time of interest is reached in the unknown function. Due to its large time step, the final answer is really far-off from the expected result.}}{15}{figure.caption.15}% -\contentsline {figure}{\numberline {2.11}{\ignorespaces In Haskell, the \texttt {type} keyword works for alias. 
The first draft of the \texttt {CT} type is a \textbf {function}, in which providing a floating point value as time returns another value as outcome.}}{15}{figure.caption.16}%
-\contentsline {figure}{\numberline {2.12}{\ignorespaces The \texttt {Parameters} type represents a given moment in time, carrying over all the necessary information to execute a solver step until the time limit is reached. Some useful typeclasses are being derived to these types, given that Haskell is capable of inferring the implementation of typeclasses in simple cases.}}{16}{figure.caption.17}%
-\contentsline {figure}{\numberline {2.13}{\ignorespaces The \texttt {CT} type is a function of from time related information to an arbitrary potentially effectful outcome value.}}{17}{figure.caption.18}%
-\contentsline {figure}{\numberline {2.14}{\ignorespaces The \texttt {CT} type can leverage monad transformers in Haskell via \texttt {Reader} in combination with \texttt {IO}.}}{17}{figure.caption.19}%
-\addvspace {10\p@ }
-\contentsline {figure}{\numberline {3.1}{\ignorespaces Given a parametric record \texttt {ps} and a dynamic value \texttt {da}, the \textit {fmap} functor of the \texttt {CT} type applies the former to the latter. Because the final result is wrapped inside the \texttt {IO} shell, a second \textit {fmap} is necessary.}}{19}{figure.caption.20}%
-\contentsline {figure}{\numberline {3.2}{\ignorespaces With the \texttt {Applicative} typeclass, it is possible to cope with functions inside the \texttt {CT} type. Again, the \textit {fmap} from \texttt {IO} is being used in the implementation.}}{20}{figure.caption.21}%
-\contentsline {figure}{\numberline {3.3}{\ignorespaces The $>>=$ operator used in the implementation is the \textit {bind} from the \texttt {IO} shell.
This indicates that when dealing with monads within monads, it is frequent to use the implementation of the internal members.}}{21}{figure.caption.22}%
-\contentsline {figure}{\numberline {3.4}{\ignorespaces The typeclass \texttt {MonadIO} transforms a given value wrapped in \texttt {IO} into a different monad. In this case, the parameter \texttt {m} of the function is the output of the \texttt {CT} type.}}{21}{figure.caption.23}%
-\contentsline {figure}{\numberline {3.5}{\ignorespaces The ability of lifting numerical values to the \texttt {CT} type resembles three FF-GPAC analog circuits: \texttt {Constant}, \texttt {Adder} and \texttt {Multiplier}.}}{22}{figure.caption.24}%
-\contentsline {figure}{\numberline {3.6}{\ignorespaces State Machines are a common abstraction in computer science due to its easy mapping between function calls and states. Memory regions and peripherals are embedded with the idea of a state, not only pure functions. Further, side effects can even act as the trigger to move from one state to another, meaning that executing a simple function can do more than return a value.
Its internal guts can significantly modify the state machine.}}{23}{figure.caption.25}%
-\contentsline {figure}{\numberline {3.7}{\ignorespaces The integrator functions attend the rules of composition of FF-GPAC, whilst the \texttt {CT} and \texttt {Integrator} types match the four basic units.}}{28}{figure.caption.26}%
-\addvspace {10\p@ }
-\contentsline {figure}{\numberline {4.1}{\ignorespaces The integrator functions are essential to create and interconnect combinational and feedback-dependent circuits.}}{32}{figure.caption.27}%
-\contentsline {figure}{\numberline {4.2}{\ignorespaces The developed DSL translates a system described by differential equations to an executable model that resembles FF-GPAC's description.}}{32}{figure.caption.28}%
-\contentsline {figure}{\numberline {4.3}{\ignorespaces Because the list implements the \texttt {Traversable} typeclass, it allows this type to use the \textit {traverse} and \textit {sequence} functions, in which both are related to changing the internal behaviour of the nested structures.}}{33}{figure.caption.29}%
-\contentsline {figure}{\numberline {4.4}{\ignorespaces A \textbf {state vector} comprises multiple state variables and requires the use of the \textit {sequence} function to sync time across all variables.}}{33}{figure.caption.30}%
-\contentsline {figure}{\numberline {4.5}{\ignorespaces When building a model for simulation, the above pipeline is always used, from both points of view. The operations with meaning, i.e., the ones in the \texttt {Semantics} pipeline, are mapped to executable operations in the \texttt {Operational} pipeline, and vice-versa.}}{34}{figure.caption.31}%
-\contentsline {figure}{\numberline {4.6}{\ignorespaces Using only FF-GPAC's basic units and their composition rules, it's possible to model the Lorenz Attractor example.}}{37}{figure.caption.32}%
-\contentsline {figure}{\numberline {4.7}{\ignorespaces After \textit {createInteg}, this record is the final image of the integrator.
The function \textit {initialize} gives us protecting against wrong records of the type \texttt {Parameters}, assuring it begins from the first iteration, i.e., $t_0$.}}{38}{figure.caption.33}%
-\contentsline {figure}{\numberline {4.8}{\ignorespaces After \textit {readInteg}, the final floating point values is obtained by reading from memory a computation and passing to it the received parameters record. The result of this application, $v$, is the returned value.}}{39}{figure.caption.34}%
-\contentsline {figure}{\numberline {4.9}{\ignorespaces The \textit {updateInteg} function only does side effects, meaning that only affects memory. The internal variable \texttt {c} is a pointer to the computation \textit {itself}, i.e., the computation being created references this exact procedure.}}{39}{figure.caption.35}%
-\contentsline {figure}{\numberline {4.10}{\ignorespaces After setting up the environment, this is the final depiction of an independent variable. The reader $x$ reads the values computed by the procedure stored in memory, a second-order Runge-Kutta method in this case.}}{40}{figure.caption.36}%
-\contentsline {figure}{\numberline {4.11}{\ignorespaces The Lorenz's Attractor example has a very famous butterfly shape from certain angles and constant values in the graph generated by the solution of the differential equations..}}{41}{figure.caption.37}%
-\addvspace {10\p@ }
-\contentsline {figure}{\numberline {5.1}{\ignorespaces During simulation, functions change the time domain to the one that better fits certain entities, such as the \texttt {Solver} and the driver.
The image is heavily inspired by a figure in~\cite {Edil2017}.}}{42}{figure.caption.38}%
-\contentsline {figure}{\numberline {5.2}{\ignorespaces Updated auxiliary types for the \texttt {Parameters} type.}}{44}{figure.caption.39}%
-\contentsline {figure}{\numberline {5.3}{\ignorespaces Linear interpolation is being used to transition us back to the continuous domain..}}{47}{figure.caption.40}%
-\contentsline {figure}{\numberline {5.4}{\ignorespaces The new \textit {updateInteg} function add linear interpolation to the pipeline when receiving a parametric record.}}{48}{figure.caption.41}%
-\addvspace {10\p@ }
-\contentsline {figure}{\numberline {6.1}{\ignorespaces With just a few iterations, the exponential behaviour of the implementation is already noticeable.}}{50}{figure.caption.43}%
-\contentsline {figure}{\numberline {6.2}{\ignorespaces The new \textit {createInteg} function relies on interpolation composed with memoization. Also, this combination \textbf {produces} results from the computation located in a different memory region, the one pointed by the \texttt {computation} pointer in the integrator.}}{56}{figure.caption.45}%
-\contentsline {figure}{\numberline {6.3}{\ignorespaces The function \textbf {reads} information from the caching pointer, rather than the pointer where the solvers compute the results.}}{57}{figure.caption.46}%
-\contentsline {figure}{\numberline {6.4}{\ignorespaces The new \textit {updateInteg} function gives to the solver functions access to the region with the cached data.}}{58}{figure.caption.47}%
-\contentsline {figure}{\numberline {6.5}{\ignorespaces Caching changes the direction of walking through the iteration axis.
It also removes an entire pass through the previous iterations.}}{59}{figure.caption.48}%
-\contentsline {figure}{\numberline {6.6}{\ignorespaces By using a logarithmic scale, we can see that the final implementation is performant with more than 100 million iterations in the simulation.}}{63}{figure.caption.51}%
-\addvspace {10\p@ }
-\contentsline {figure}{\numberline {7.1}{\ignorespaces Resettable counter in hardware, inspired by Levent's works~\cite {levent2000, levent2002}.}}{68}{figure.caption.52}%
-\contentsline {figure}{\numberline {7.2}{\ignorespaces Diagram of \texttt {createInteg} primitive for intuition..}}{70}{figure.caption.53}%
-\contentsline {figure}{\numberline {7.3}{\ignorespaces Results of FFACT are similar to the final version of FACT..}}{73}{figure.caption.54}%
+\contentsline {figure}{\numberline {1.1}{\ignorespaces The translation between the world of software and the mathematical description of differential equations is explicit in the final version of \texttt {FACT}.}}{5}{figure.caption.8}%
+\addvspace {10\p@ }
+\contentsline {figure}{\numberline {2.1}{\ignorespaces The combination of these four basic units composes any GPAC circuit (taken from~\cite {Edil2018} with permission).}}{8}{figure.caption.9}%
+\contentsline {figure}{\numberline {2.2}{\ignorespaces Polynomial circuits resemble combinational circuits, in which the circuit responds instantly to changes on its inputs (taken from~\cite {Edil2018} with permission).}}{9}{figure.caption.10}%
+\contentsline {figure}{\numberline {2.3}{\ignorespaces Types are not just labels; they enhance the manipulated data with new information.
Their difference in shape can work as the interface for the data.}}{10}{figure.caption.11}%
+\contentsline {figure}{\numberline {2.4}{\ignorespaces Functions' signatures are contracts; they specify which shape the input information has as well as which shape the output information will have.}}{10}{figure.caption.11}%
+\contentsline {figure}{\numberline {2.5}{\ignorespaces Sum types can be understood in terms of sets, in which the members of the set are available candidates for the outer shell type. Parity and possible values in digital states are examples.}}{11}{figure.caption.12}%
+\contentsline {figure}{\numberline {2.6}{\ignorespaces Product types are a combination of different sets, where you pick a representative from each one. Digital clocks' time and objects' coordinates in space are common use cases. In Haskell, a product type can be defined using a \textbf {record} alongside the constructor, where the labels for each member inside it are explicit.}}{11}{figure.caption.13}%
+\contentsline {figure}{\numberline {2.7}{\ignorespaces Depending on the application, different representations of the same structure need to be used due to the domain of interest and/or memory constraints.}}{12}{figure.caption.14}%
+\contentsline {figure}{\numberline {2.8}{\ignorespaces The minimum requirement for the \texttt {Ord} typeclass is the $<=$ operator, meaning that the functions $<$, $<=$, $>$, $>=$, \texttt {max} and \texttt {min} are now unlocked for the type \texttt {ClockTime} after the implementation. Typeclasses can be viewed as a third dimension in a type.}}{12}{figure.caption.15}%
+\contentsline {figure}{\numberline {2.9}{\ignorespaces Replacements for the validation function within a pipeline like the above are common.}}{13}{figure.caption.16}%
+\contentsline {figure}{\numberline {2.10}{\ignorespaces The initial value is used as a starting point for the procedure. The algorithm continues until the time of interest is reached in the unknown function.
Due to its large time step, the final answer is really far-off from the expected result.}}{15}{figure.caption.17}%
+\contentsline {figure}{\numberline {2.11}{\ignorespaces In Haskell, the \texttt {type} keyword creates an alias. The first draft of the \texttt {CT} type is a \textbf {function}, in which providing a floating point value as time returns another value as outcome.}}{15}{figure.caption.18}%
+\contentsline {figure}{\numberline {2.12}{\ignorespaces The \texttt {Parameters} type represents a given moment in time, carrying over all the necessary information to execute a solver step until the time limit is reached. Some useful typeclasses are being derived for these types, given that Haskell is capable of inferring the implementation of typeclasses in simple cases.}}{16}{figure.caption.19}%
+\contentsline {figure}{\numberline {2.13}{\ignorespaces The \texttt {CT} type is a function from time-related information to an arbitrary, potentially effectful outcome value.}}{17}{figure.caption.20}%
+\contentsline {figure}{\numberline {2.14}{\ignorespaces The \texttt {CT} type can leverage monad transformers in Haskell via \texttt {Reader} in combination with \texttt {IO}.}}{17}{figure.caption.21}%
+\addvspace {10\p@ }
+\contentsline {figure}{\numberline {3.1}{\ignorespaces Given a parametric record \texttt {ps} and a dynamic value \texttt {da}, the \textit {fmap} functor of the \texttt {CT} type applies the former to the latter. Because the final result is wrapped inside the \texttt {IO} shell, a second \textit {fmap} is necessary.}}{19}{figure.caption.22}%
+\contentsline {figure}{\numberline {3.2}{\ignorespaces With the \texttt {Applicative} typeclass, it is possible to cope with functions inside the \texttt {CT} type.
Again, the \textit {fmap} from \texttt {IO} is being used in the implementation.}}{20}{figure.caption.23}%
+\contentsline {figure}{\numberline {3.3}{\ignorespaces The $>>=$ operator used in the implementation is the \textit {bind} from the \texttt {IO} shell. This indicates that when dealing with monads within monads, it is frequent to use the implementation of the internal members.}}{21}{figure.caption.24}%
+\contentsline {figure}{\numberline {3.4}{\ignorespaces The typeclass \texttt {MonadIO} transforms a given value wrapped in \texttt {IO} into a different monad. In this case, the parameter \texttt {m} of the function is the output of the \texttt {CT} type.}}{21}{figure.caption.25}%
+\contentsline {figure}{\numberline {3.5}{\ignorespaces The ability of lifting numerical values to the \texttt {CT} type resembles three FF-GPAC analog circuits: \texttt {Constant}, \texttt {Adder} and \texttt {Multiplier}.}}{22}{figure.caption.26}%
+\contentsline {figure}{\numberline {3.6}{\ignorespaces State Machines are a common abstraction in computer science due to their easy mapping between function calls and states. Memory regions and peripherals are embedded with the idea of a state, not only pure functions. Further, side effects can even act as the trigger to move from one state to another, meaning that executing a simple function can do more than return a value.
Its internal guts can significantly modify the state machine.}}{23}{figure.caption.27}%
+\contentsline {figure}{\numberline {3.7}{\ignorespaces The integrator functions attend the rules of composition of FF-GPAC, whilst the \texttt {CT} and \texttt {Integrator} types match the four basic units.}}{28}{figure.caption.28}%
+\addvspace {10\p@ }
+\contentsline {figure}{\numberline {4.1}{\ignorespaces The integrator functions are essential to create and interconnect combinational and feedback-dependent circuits.}}{32}{figure.caption.29}%
+\contentsline {figure}{\numberline {4.2}{\ignorespaces The developed DSL translates a system described by differential equations to an executable model that resembles FF-GPAC's description.}}{32}{figure.caption.30}%
+\contentsline {figure}{\numberline {4.3}{\ignorespaces Because the list implements the \texttt {Traversable} typeclass, it allows this type to use the \textit {traverse} and \textit {sequence} functions, in which both are related to changing the internal behaviour of the nested structures.}}{33}{figure.caption.31}%
+\contentsline {figure}{\numberline {4.4}{\ignorespaces A \textbf {state vector} comprises multiple state variables and requires the use of the \textit {sequence} function to sync time across all variables.}}{33}{figure.caption.32}%
+\contentsline {figure}{\numberline {4.5}{\ignorespaces When building a model for simulation, the above pipeline is always used, from both points of view. The operations with meaning, i.e., the ones in the \texttt {Semantics} pipeline, are mapped to executable operations in the \texttt {Operational} pipeline, and vice-versa.}}{34}{figure.caption.33}%
+\contentsline {figure}{\numberline {4.6}{\ignorespaces Using only FF-GPAC's basic units and their composition rules, it's possible to model the Lorenz Attractor example.}}{37}{figure.caption.34}%
+\contentsline {figure}{\numberline {4.7}{\ignorespaces After \textit {createInteg}, this record is the final image of the integrator.
The function \textit {initialize} gives us protection against wrong records of the type \texttt {Parameters}, assuring it begins from the first iteration, i.e., $t_0$.}}{38}{figure.caption.35}%
+\contentsline {figure}{\numberline {4.8}{\ignorespaces After \textit {readInteg}, the final floating point value is obtained by reading from memory a computation and passing to it the received parameters record. The result of this application, $v$, is the returned value.}}{39}{figure.caption.36}%
+\contentsline {figure}{\numberline {4.9}{\ignorespaces The \textit {updateInteg} function only does side effects, meaning that it only affects memory. The internal variable \texttt {c} is a pointer to the computation \textit {itself}, i.e., the computation being created references this exact procedure.}}{39}{figure.caption.37}%
+\contentsline {figure}{\numberline {4.10}{\ignorespaces After setting up the environment, this is the final depiction of an independent variable. The reader $x$ reads the values computed by the procedure stored in memory, a second-order Runge-Kutta method in this case.}}{40}{figure.caption.38}%
+\contentsline {figure}{\numberline {4.11}{\ignorespaces The Lorenz's Attractor example has a very famous butterfly shape from certain angles and constant values in the graph generated by the solution of the differential equations.}}{41}{figure.caption.39}%
+\addvspace {10\p@ }
+\contentsline {figure}{\numberline {5.1}{\ignorespaces During simulation, functions change the time domain to the one that better fits certain entities, such as the \texttt {Solver} and the driver.
The image is heavily inspired by a figure in~\cite {Edil2017}.}}{42}{figure.caption.40}%
+\contentsline {figure}{\numberline {5.2}{\ignorespaces Updated auxiliary types for the \texttt {Parameters} type.}}{44}{figure.caption.41}%
+\contentsline {figure}{\numberline {5.3}{\ignorespaces Linear interpolation is being used to transition us back to the continuous domain.}}{47}{figure.caption.42}%
+\contentsline {figure}{\numberline {5.4}{\ignorespaces The new \textit {updateInteg} function adds linear interpolation to the pipeline when receiving a parametric record.}}{48}{figure.caption.43}%
+\addvspace {10\p@ }
+\contentsline {figure}{\numberline {6.1}{\ignorespaces With just a few iterations, the exponential behaviour of the implementation is already noticeable.}}{50}{figure.caption.45}%
+\contentsline {figure}{\numberline {6.2}{\ignorespaces The new \textit {createInteg} function relies on interpolation composed with memoization. Also, this combination \textbf {produces} results from the computation located in a different memory region, the one pointed to by the \texttt {computation} pointer in the integrator.}}{56}{figure.caption.47}%
+\contentsline {figure}{\numberline {6.3}{\ignorespaces The function \textbf {reads} information from the caching pointer, rather than the pointer where the solvers compute the results.}}{57}{figure.caption.48}%
+\contentsline {figure}{\numberline {6.4}{\ignorespaces The new \textit {updateInteg} function gives the solver functions access to the region with the cached data.}}{58}{figure.caption.49}%
+\contentsline {figure}{\numberline {6.5}{\ignorespaces Caching changes the direction of walking through the iteration axis.
It also removes an entire pass through the previous iterations.}}{59}{figure.caption.50}%
+\contentsline {figure}{\numberline {6.6}{\ignorespaces By using a logarithmic scale, we can see that the final implementation is performant with more than 100 million iterations in the simulation.}}{63}{figure.caption.53}%
+\addvspace {10\p@ }
+\contentsline {figure}{\numberline {7.1}{\ignorespaces Resettable counter in hardware, inspired by Levent's works~\cite {levent2000, levent2002}.}}{68}{figure.caption.54}%
+\contentsline {figure}{\numberline {7.2}{\ignorespaces Diagram of the \texttt {createInteg} primitive for intuition.}}{70}{figure.caption.55}%
+\contentsline {figure}{\numberline {7.3}{\ignorespaces Results of FFACT are similar to the final version of FACT.}}{73}{figure.caption.56}%
\addvspace {10\p@ }
\addvspace {10\p@ }
\babel@toc {american}{}\relax
diff --git a/doc/MastersThesis/thesis.toc b/doc/MastersThesis/thesis.toc
index ee6b495..c8c9fdb 100644
--- a/doc/MastersThesis/thesis.toc
+++ b/doc/MastersThesis/thesis.toc
@@ -42,7 +42,7 @@
\contentsline {section}{\numberline {8.3}Final Thoughts}{77}{section.8.3}%
\contentsline {chapter}{\numberline {9}Appendix}{78}{chapter.9}%
\contentsline {section}{\numberline {9.1}Literate Programming}{78}{section.9.1}%
-\contentsline {chapter}{References}{80}{section*.55}%
+\contentsline {chapter}{References}{80}{section*.57}%
\babel@toc {american}{}\relax
\babel@toc {american}{}\relax
\babel@toc {american}{}\relax

From 7df5cd935ef2a9d6a7868512bc708c637dd3e17f Mon Sep 17 00:00:00 2001
From: EduardoLR10
Date: Sun, 16 Mar 2025 20:04:12 -0300
Subject: [PATCH 02/10] Add qualify review comments

---
doc/MastersThesis/Lhs/Appendix.lhs | 2 +-
doc/MastersThesis/Lhs/Caching.lhs | 30 +++++++++----------
doc/MastersThesis/Lhs/Conclusion.lhs | 25 +++++++---------
doc/MastersThesis/Lhs/Design.lhs | 28 ++++++++---------
doc/MastersThesis/Lhs/Enlightenment.lhs | 32 ++++++++++----------
doc/MastersThesis/Lhs/Fixing.lhs | 11 +++++--
doc/MastersThesis/Lhs/Implementation.lhs | 38 ++++++++++++------------
doc/MastersThesis/Lhs/Interpolation.lhs | 16 +++++-----
doc/MastersThesis/Lhs/Introduction.lhs | 35 +++++++++++++---------
doc/MastersThesis/thesis.lhs | 2 +-
doc/MastersThesis/thesis.lof | 10 +++----
doc/MastersThesis/thesis.toc | 21 ++++++-------
12 files changed, 130 insertions(+), 120 deletions(-)

diff --git a/doc/MastersThesis/Lhs/Appendix.lhs b/doc/MastersThesis/Lhs/Appendix.lhs
index 5274d8f..feeae86 100644
--- a/doc/MastersThesis/Lhs/Appendix.lhs
+++ b/doc/MastersThesis/Lhs/Appendix.lhs
@@ -1,7 +1,7 @@
\section{Literate Programming}
-This thesis leverages~\footnote{\href{https://en.wikipedia.org/wiki/Literate_programming}{\textcolor{blue}{Literate Programming}}.}, a concept
+This dissertation made use of literate programming~\footnote{\href{https://en.wikipedia.org/wiki/Literate_programming}{\textcolor{blue}{Literate Programming}}.}, a concept
introduced by Donald Knuth~\cite{knuth1992}. Hence, this thesis can be executed using the same source files from which the \texttt{PDF} is created.
This process requires the following dependencies:
diff --git a/doc/MastersThesis/Lhs/Caching.lhs b/doc/MastersThesis/Lhs/Caching.lhs
index 54d328c..89fe112 100644
--- a/doc/MastersThesis/Lhs/Caching.lhs
+++ b/doc/MastersThesis/Lhs/Caching.lhs
@@ -66,7 +66,7 @@ Chapter 5, \textit{Travelling across Domains}, leveraged a major concern with th
\section{Performance}
-The simulations executed in \texttt{FACT} take too long to run. For instance, to execute the Lorenz's Attractor example using the second-order Runge-Kutta method with an unrealistic time step size for real simulations (time step of $1$ second), the simulator can take around \textbf{10 seconds} to compute 0 to 5 seconds of the physical system with a testbench using a \texttt{Ryzen 7 5700X} AMD processor and 128GB of RAM.
Increasing this interval shows an exponential growth in execution time, as depicted by Table \ref{tab:execTimes} and by Figure \ref{fig:graph1} (values obtained after the interpolation tweak). Although the memory use is also problematic, it is hard to reason about those numbers due to Haskell's \textbf{garbage collector}~\footnote{Garbage Collector \href{https://wiki.haskell.org/GHC/Memory\_Management}{\textcolor{blue}{wiki page}}.}, a memory manager that deals with Haskell's \textbf{immutability}. Thus, the memory values serve just to solidify the notion that \texttt{FACT} is inneficient, showing an exponentinal growth in resource use, which makes it impractical to execute longer simulations and diminishes the usability of the proposed software.
+The simulations executed in \texttt{FACT} take too long to run. For instance, to execute the Lorenz's Attractor example using the second-order Runge-Kutta method with an unrealistic time step size for real simulations (time step of $1$ second), the simulator can take around \textit{10 seconds} to compute 0 to 5 seconds of the physical system with a testbench using a \texttt{Ryzen 7 5700X} AMD processor and 128GB of RAM. Increasing this interval shows an exponential growth in execution time, as depicted by Table \ref{tab:execTimes} and by Figure \ref{fig:graph1} (values obtained after the interpolation tweak). Although the memory use is also problematic, it is hard to reason about those numbers due to Haskell's \textit{garbage collector}~\footnote{Garbage Collector \href{https://wiki.haskell.org/GHC/Memory\_Management}{\textcolor{blue}{wiki page}}.}, a memory manager that deals with Haskell's \textit{immutability}. Thus, the memory values serve just to solidify the notion that \texttt{FACT} is inefficient, showing an exponential growth in resource use, which makes it impractical to execute longer simulations and diminishes the usability of the proposed software.
\begin{table}[H]
\centering
@@ -91,7 +91,7 @@ Total of Iterations & Execution Time (milliseconds) & Consumed Memory (KB) \\ \
\section{The Saving Strategy}
-Before explaining the solution, it is worth describing \textbf{why} and \textbf{where} this problem arises. First, we need to take a look back onto the solvers' functions, such as the \textit{integEuler} function, introduced in chapter 3, \textit{Effectful Integrals}:
+Before explaining the solution, it is worth describing \textit{why} and \textit{where} this problem arises. First, we need to take a look back at the solvers' functions, such as the \textit{integEuler} function, introduced in chapter 3, \textit{Effectful Integrals}:
\begin{spec}
integEuler :: CT Double
@@ -113,7 +113,7 @@ integEuler diff i y = do
return v
\end{spec}
-From chapter 3, we know that lines 10 to 13 serve the purpose of creating a new parametric record to execute a new solver step for the \textbf{previous} iteration, in order to calculate the current one. From chapter 4, this code section turned out to be where the implicit recursion came in, because the current iteration needs to calculate the previous one. Effectively, this means that for \textbf{all} iterations, \textbf{all} previous steps from each one needs to be calculated. The problem is now clear: unnecessary computations are being made for all iterations, because the same solvers steps are not being saved for future steps, although these values do \textbf{not} change. In other words, to calculate step 3 of the solver, steps 1 and 2 are the same to calculate step 4 as well, but these values are being lost during the simulation.
+From chapter 3, we know that lines 10 to 13 serve the purpose of creating a new parametric record to execute a new solver step for the \textit{previous} iteration, in order to calculate the current one.
From chapter 4, this code section turned out to be where the implicit recursion came in, because the current iteration needs to calculate the previous one. Effectively, this means that for \textit{all} iterations, \textit{all} previous steps need to be calculated. The problem is now clear: unnecessary computations are being made for all iterations, because the same solver steps are not being saved for future steps, although these values do \textit{not} change. In other words, steps 1 and 2 are needed to calculate step 3 of the solver, and the very same steps 1 and 2 are needed to calculate step 4 as well, but these values are being lost during the simulation.

To estimate how this lack of optimization affects performance, we can calculate how many solver steps will be executed to simulate the Lorenz's Attractor example used in chapter 4, \textit{Execution Walkthrough}. Table \ref{tab:solverSteps} shows the total number of solver steps needed per iteration when simulating the Lorenz example with the Euler method. In addition, the amount of steps also increases depending on which solver method is being used, given that in the higher order Runge-Kutta methods, multiple stages count as a new step as well.

@@ -129,16 +129,16 @@ Iteration & Total Solver Steps \\ \hline
5 & 15 \\ \hline
6 & 21 \\ \hline
\end{tabular}
-\caption{\label{tab:solverSteps}Because the previous solver steps are not saved, the total number of steps \textbf{per iteration} starts to accumullate following the numerical sequence of \textbf{triangular numbers} when using the Euler method.}
+\caption{\label{tab:solverSteps}Because the previous solver steps are not saved, the total number of steps \textit{per iteration} starts to accumulate following the numerical sequence of \textit{triangular numbers} when using the Euler method.}
\end{table}
-This is the cause of the imense hit in performance.
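The triangular-number growth described above can be sanity-checked with a short standalone sketch. This is illustrative only and not part of the patch or of FACT's code; the function names are hypothetical:

```haskell
-- Sketch only: without memoization, iteration n with the Euler method must
-- recompute all n previous solver steps, so the running total of steps is
-- the n-th triangular number, n * (n + 1) / 2.
stepsForIteration :: Int -> Int
stepsForIteration n = n  -- n fresh solver steps just to reach iteration n

totalSteps :: Int -> Int
totalSteps n = sum (map stepsForIteration [1 .. n])

main :: IO ()
main = print (map totalSteps [1 .. 6])  -- [1,3,6,10,15,21], matching the table
```

With caching in place, iteration $n$ would instead cost a constant number of new steps, turning the quadratic total into a linear one.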
However, it also clarifies the solution: if the previous solver steps are saved, the next iterations don't need to re-compute them in order to continue. In the computer domain, the act of saving previous steps that do not change is called \textbf{memoization} and it is one form to execute \textbf{caching}. This optimization technique stores the values in a register or memory region and, instead of the process starts calculating the result again, it consults this region to quickly obtain the answer.
+This is the cause of the immense hit in performance. However, it also clarifies the solution: if the previous solver steps are saved, the next iterations don't need to re-compute them in order to continue. In the computer domain, the act of saving previous steps that do not change is called \textit{memoization} and it is one form of \textit{caching}. This optimization technique stores the values in a register or memory region and, instead of the process calculating the result again from scratch, it consults this region to quickly obtain the answer.
\section{Tweak II: Memoization}
-The first tweak, \textit{Memoization}, alters the \texttt{Integrator} type. The integrator will now have a pointer to the memory region that stores the previous computed values, meaning that before executing a new computation, it will consult this region first. Because the process is executed in a \textbf{sequential} manner, it is guaranteed that the previous result will be used. Thus, the accumulation of the solver steps will be addressed, and the amount of steps will be equal to the amount of iterations times how many stages the solver method uses.
+This tweak, \textit{Memoization}, alters the \texttt{Integrator} type. The integrator will now have a pointer to the memory region that stores the previously computed values, meaning that before executing a new computation, it will consult this region first.
Because the process is executed in a \textit{sequential} manner, it is guaranteed that the previous result will be used. Thus, the accumulation of the solver steps will be addressed, and the number of steps will be equal to the number of iterations times how many stages the solver method uses.
-The \textit{memo} function creates this memory region for storing values, as well as providing read access to it. This is the only function in \texttt{FACT} that uses a \textit{constraint}, i.e., it restricts the parametric types to the ones that have implemented the requirement. In our case, this function requires that the internal type \texttt{CT} dependency has implemented the \texttt{UMemo} typeclass. Because this typeclass is too complicated to be in the scope of this project, we will settle with the following explanation: it is required that the parametric values are capable of being contained inside an \textbf{mutable} array, which is the case for our \texttt{Double} values.
+The \textit{memo} function creates this memory region for storing values, as well as providing read access to it. This is the only function in \texttt{FACT} that uses a \textit{constraint}, i.e., it restricts the parametric types to the ones that have implemented the requirement. In our case, this function requires that the internal type \texttt{CT} dependency has implemented the \texttt{UMemo} typeclass. Because this typeclass is beyond the scope of this project, we will settle for the following explanation: it is required that the parametric values are capable of being contained inside a \textit{mutable} array, which is the case for our \texttt{Double} values.
As dependencies, the \textit{memo} function receives the computation, as well as the interpolation function that is assumed to be used, in order to attenuate the domain problem described in the previous chapter. This means that at the end, the final result will be piped to the interpolation function.
\begin{code} memo :: UMemo e => (CT e -> CT e) -> CT e -> CT (CT e) @@ -182,13 +182,13 @@ memo interpolate m = do
The function starts by getting how many iterations will occur in the simulation, as well as how many stages the chosen method uses (lines 5 to 7). This is used to pre-allocate the minimum amount of memory required for the execution (line 8). This mutable array is two-dimensional and can be viewed as a table in which the number of iterations and stages determine the number of rows and columns. Pointers to iterate across the table are declared as \textit{nref} and \textit{stref} (lines 9 and 10), to read iteration and stage values respectively. The code block from line 11 to line 36 delimits a procedure or computation that will only be used when needed, and it is being called at the end of the \textit{memo} function (line 37).
-The next step is to follow the exection of this internal function. From line 13 to line 17, auxiliar "variables", i.e., labels to read information, are created to facilitate manipulation of the solver (\texttt{sl}), interval (\texttt{iv}), current iteration (\texttt{n}), current stage (\texttt{st}) and the final stage used in a solver step (\texttt{stu}). The definition of \textit{loop}, which starts at line 18 and closes at line 33, uses all the previously created labels.
The conditional block (line 19 to 33) will store in the pre-allocated memory region the computed values and, because they are stored in a \textbf{sequential} way, the stop condition of the loop is one of the following: the iteration counter of the loop (\texttt{n'}) surpassed the current iteration \textbf{or} the iteration counter matches the current iteration \textbf{and} the stage counter (\texttt{st'}) reached the ceiling of stages of used solver method (line 19). When the loop stops, it \textbf{reads} from the allocated array the value of interest (line 21), given that it is guaranteed that is already in memory. If this condition is not true, it means that further iterations in the loop need to occur in one of the two axis, iteration or stage.
+The next step is to follow the execution of this internal function. From line 13 to line 17, auxiliary "variables", i.e., labels to read information, are created to facilitate manipulation of the solver (\texttt{sl}), interval (\texttt{iv}), current iteration (\texttt{n}), current stage (\texttt{st}) and the final stage used in a solver step (\texttt{stu}). The definition of \textit{loop}, which starts at line 18 and closes at line 33, uses all the previously created labels. The conditional block (line 19 to 33) will store in the pre-allocated memory region the computed values and, because they are stored in a \textit{sequential} way, the stop condition of the loop is one of the following: the iteration counter of the loop (\texttt{n'}) surpassed the current iteration \textit{or} the iteration counter matches the current iteration \textit{and} the stage counter (\texttt{st'}) reached the ceiling of stages of the used solver method (line 19). When the loop stops, it \textit{reads} from the allocated array the value of interest (line 21), given that it is guaranteed that it is already in memory. If this condition is not true, it means that further iterations in the loop need to occur in one of the two axes, iteration or stage.
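As a deliberately simplified, one-dimensional sketch of this sequential fill (hypothetical code, not \texttt{FACT}'s actual two-dimensional \textit{memo} table; the names \textit{memoSteps} and \textit{filled} are invented for illustration): results are written into a pre-allocated mutable array in order, a counter remembers how far the table has been filled, and a request for entry $n$ only computes the entries that are still missing.

```haskell
import Control.Monad (when)
import Data.Array.IO (IOArray, newArray, readArray, writeArray)
import Data.IORef

-- Hypothetical 1-D sketch of the memo table: 'step i prev' plays the role of
-- one solver step, and entries are filled strictly in sequence.
memoSteps :: Int -> (Int -> Double -> Double) -> Double -> IO (Int -> IO Double)
memoSteps size step y0 = do
  table  <- newArray (0, size - 1) 0 :: IO (IOArray Int Double)
  writeArray table 0 y0
  filled <- newIORef 0                     -- index of the last computed entry
  let request n = do
        done <- readIORef filled
        when (n > done) $ do               -- compute only the missing entries
          prev <- readArray table done
          let loop i p
                | i > n     = return ()
                | otherwise = do
                    let v = step i p       -- one solver step from the previous value
                    writeArray table i v
                    loop (i + 1) v
          loop (done + 1) prev
          writeIORef filled n
        readArray table n                  -- cached read from the filled table
  return request
```

Asking twice for the same entry performs the solver steps only once, and a later request resumes from the last filled index instead of starting over, which is exactly the behavior the loop above implements in two dimensions.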
The first step towards that goal is to save the value of the current iteration and stage into memory. The continuous machine \texttt{m}, received as a dependency in line 3, is used to compute a new result with the current counters for iteration and stage (lines 23 to 26). Then, this new value is written into the array (line 27). The condition in line 28 checks if the current stage already achieved its maximum possible value. In that case, the stage and iteration counters will be reset to the first stage (line 29) of the next iteration (line 30) respectively, and the loop should continue (line 31). Otherwise, we need to advance to the next stage within the same iteration: the loop should continue with the same iteration counter but with the stage counter incremented (lines 32 and 33). Lines 34 to 36 are the trigger to the beginning of the loop, with \textit{nref} and \textit{stref} being read. These values set the initial values for the counters used in the \textit{loop} function, and both of their values start at zero (lines 10 and 11). All computations related to the \textit{loop} function will only be called when the \textit{r} function is called. Further, all of these impure computations (lines 12 to 36) compose the definition of \textit{r} (line 12), which is being returned in line 37 combined with the interpolation function \textit{tr} and being wrapped with an extra \texttt{CT} shell via the \textit{pure} function (provided by the \texttt{Applicative} typeclass).
-With this function on-hand, it remains to couple it to the \texttt{Integrator} type, meaning that \textbf{all} integrator functions need to be aware of this new caching strategy.
+With this function in hand, it remains to couple it to the \texttt{Integrator} type, meaning that \textit{all} integrator functions need to be aware of this new caching strategy.
First and foremost, a pointer to this memory region needs to be added to the integrator type itself:
\newpage
@@ -199,7 +199,7 @@ data Integrator = Integrator { initial :: CT Double, } \end{code}
-Next, two other functions need to be adapted: \textit{createInteg} and \textit{readInteg}. In the former function, the new pointer will be used, and it points to the region where the mutable array will be allocated. In the latter, instead of reading from the computation itself, the read-only pointer will be looking at the \textbf{cached} version. These differences will be illustrated by using the same integrator and state variables used in the Lorenz's Attractor example, detailed in chapter 4, \textit{Execution Walkthrough}.
+Next, two other functions need to be adapted: \textit{createInteg} and \textit{readInteg}. In the former function, the new pointer will be used, and it points to the region where the mutable array will be allocated. In the latter, instead of reading from the computation itself, the read-only pointer will be looking at the \textit{cached} version. These differences will be illustrated by using the same integrator and state variables used in the Lorenz's Attractor example, detailed in chapter 4, \textit{Execution Walkthrough}.
The main difference in the updated version of the \textit{createInteg} function is the inclusion of the new pointer that reads the cached memory (lines 4 to 7). The pointer \texttt{computation}, which will be changed by \textit{updateInteg} in a model of the differential equation, is being read in lines 8 to 11 and piped with interpolation and memoization in line 12. This approach maintains the interpolation, justified in the previous chapter, and adds the aforementioned caching strategy. Finally, the result is written in the memory region pointed to by the caching pointer (line 13).
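With \texttt{IO Double} standing in for \texttt{CT Double}, the extended record can be pictured roughly as follows (a sketch; only the field names \textit{initial}, \textit{cache} and \textit{computation} come from the surrounding text, everything else is an assumption for illustration):

```haskell
import Data.IORef

-- Sketch of the extended integrator: besides the pointer that updateInteg
-- mutates (computation), it now carries a pointer to the cached, memoized
-- computation (cache) that readInteg consults.
data Integrator = Integrator
  { initial     :: IO Double          -- initial condition of the state variable
  , cache       :: IORef (IO Double)  -- read side: the memoized results
  , computation :: IORef (IO Double)  -- write side: updated by updateInteg
  }
```

Keeping the read side and the write side in separate pointers is what lets \textit{readInteg} consult the cached version while the solver functions keep writing through \texttt{computation}.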
@@ -227,7 +227,7 @@ createInteg i = do \begin{center} \includegraphics[width=0.95\linewidth]{MastersThesis/img/NewInteg} \end{center}
-\caption{The new \textit{createInteg} function relies on interpolation composed with memoization. Also, this combination \textbf{produces} results from the computation located in a different memory region, the one pointed by the \texttt{computation} pointer in the integrator.}
+\caption{The new \textit{createInteg} function relies on interpolation composed with memoization. Also, this combination \textit{produces} results from the computation located in a different memory region, the one pointed to by the \texttt{computation} pointer in the integrator.}
\label{fig:createInteg} \end{figure}
@@ -241,7 +241,7 @@ readInteg = join . liftIO . readIORef . cache \begin{center} \includegraphics[width=0.95\linewidth]{MastersThesis/img/ReadInteg} \end{center}
-\caption{The function \textbf{reads} information from the caching pointer, rather than the pointer where the solvers compute the results.}
+\caption{The function \textit{reads} information from the caching pointer, rather than the pointer where the solvers compute the results.}
\label{fig:readInteg} \end{figure}
@@ -276,9 +276,9 @@ The solver functions, \textit{integEuler}, \textit{integRK2} and \textit{integRK \section{A change in Perspective}
-Before the implementation of the described caching strategy, \textbf{all} the solver methods rely on implicit recursion to get the previous iteration value. Thus, performance was degraded due to this potentially long stack call. After caching, this mechanism is not only faster, but it \textbf{completely} changes how the solvers will get these past values.
+Before the implementation of the described caching strategy, \textit{all} the solver methods relied on implicit recursion to get the previous iteration value. Thus, performance was degraded due to this potentially long stack call.
After caching, this mechanism is not only faster, but it \textit{completely} changes how the solvers will get these past values.
-For instance, when using the function \textit{runCTFinal} as the driver, the simulation will start by the last iteration. Without caching, the solver would go from the current iteration to the previous ones, until it reaches the base case with the initial condition and starts backtracking the recursive calls to compute the result of the final iteration. On the other hand, with the caching strategy, the \textit{memo} function goes in the \textbf{opposite} direction: it starts from the beginning, with the counters at zero, and then incrementally proceeds until it reaches the desired iteration.
+For instance, when using the function \textit{runCTFinal} as the driver, the simulation will start with the last iteration. Without caching, the solver would go from the current iteration to the previous ones, until it reaches the base case with the initial condition and starts backtracking the recursive calls to compute the result of the final iteration. On the other hand, with the caching strategy, the \textit{memo} function goes in the \textit{opposite} direction: it starts from the beginning, with the counters at zero, and then incrementally proceeds until it reaches the desired iteration.
Figure \ref{fig:memoDirection} depicts this stark difference in approach when using memoization in \texttt{FACT}. Instead of iterating through all iterations two times, one backtracking until the base case and another one to accumulate all computed values, the new version starts from the base case, i.e., at iteration 0, and stops when it achieves the desired iteration, saving all the values along the way.
@@ -300,7 +300,7 @@ exampleModel = sequence [x, y] \end{spec}
-The caching strategy assumes that the created mutable array will be available for the entire simulation.
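The two directions can be contrasted with a toy Euler solve of $y' = y$, $y(0) = 1$ (hypothetical code, not \texttt{FACT}'s solvers): the first version recurses backwards from iteration $n$ to the base case, while the second walks forward from iteration 0, mirroring what \textit{memo} does; both produce the same value.

```haskell
dt :: Double
dt = 0.01

-- Top-down: implicit recursion back to the base case, like the original solvers.
eulerTopDown :: Int -> Double
eulerTopDown 0 = 1
eulerTopDown n = prev + dt * prev
  where prev = eulerTopDown (n - 1)

-- Bottom-up: start at iteration 0 and advance, like the memoized version.
eulerBottomUp :: Int -> Double
eulerBottomUp n = go 0 1
  where
    go i y
      | i == n    = y
      | otherwise = go (i + 1) (y + dt * y)
```

The bottom-up version needs no call stack proportional to $n$, and every intermediate value it produces is available to be stored along the way.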
However, the proposed models will \textbf{always} discard the table created by the \textit{createInteg} function due to the garbage collector~\footnote{Garbage Collector \href{https://wiki.haskell.org/GHC/Memory\_Management}{\textcolor{blue}{wiki page}}.}, after the \textit{sequence} function. Even worse, the table will be created again each time the model is being called and a parametric record is being provided, which happens when using the driver. Thus, the proposed solution to address this problem is to update the \texttt{Model} alias to a \textbf{function} of the model. This can be achieved by \textbf{wrapping} the state vector with a the \texttt{CT} type, i.e., wrapping the model using the function \textit{pure} or \textit{return}. In this manner, the computation will be "placed" as a side effect of the \texttt{IO} monad and Haskell's memory management system will not remove the table used for caching, in the first computation. So, the following code is the new type alias, alongside the previous example model using the \textit{return} function:
+The caching strategy assumes that the created mutable array will be available for the entire simulation. However, the proposed models will \textit{always} discard the table created by the \textit{createInteg} function due to the garbage collector~\footnote{Garbage Collector \href{https://wiki.haskell.org/GHC/Memory\_Management}{\textcolor{blue}{wiki page}}.}, after the \textit{sequence} function. Even worse, the table will be created again each time the model is being called and a parametric record is being provided, which happens when using the driver. Thus, the proposed solution to address this problem is to update the \texttt{Model} alias to a \textit{function} of the model. This can be achieved by \textit{wrapping} the state vector with the \texttt{CT} type, i.e., wrapping the model using the function \textit{pure} or \textit{return}.
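A toy sketch of why this extra layer helps, with \texttt{IO} standing in for \texttt{CT} and an \texttt{IORef} standing in for the memo table (hypothetical code; the names \textit{ToyModel} and \textit{toyModel} are invented for illustration): the outer action allocates the cache once, and the inner action, the model proper, reuses it on every run.

```haskell
import Data.IORef

type ToyModel a = IO (IO a)  -- outer layer: setup; inner layer: the model itself

toyModel :: ToyModel Int
toyModel = do
  cacheRef <- newIORef (0 :: Int)  -- allocated once, like the memo table
  return $ do                      -- the model, run once per request
    modifyIORef cacheRef (+ 1)
    readIORef cacheRef

main :: IO ()
main = do
  model <- toyModel   -- set up the cache a single time
  a <- model
  b <- model
  print (a, b)        -- the same cache was reused across runs: (1,2)
```

Running the outer action once and the inner action many times is what keeps the allocated table alive across solver requests instead of rebuilding it on every call.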
In this manner, the computation will be "placed" as a side effect of the \texttt{IO} monad and Haskell's memory management system will not remove the table used for caching in the first computation. So, the following code is the new type alias, alongside the previous example model using the \textit{return} function:
\begin{spec} type Model a = CT (CT a)
diff --git a/doc/MastersThesis/Lhs/Conclusion.lhs b/doc/MastersThesis/Lhs/Conclusion.lhs index 1146cd1..18db9c8 100644 --- a/doc/MastersThesis/Lhs/Conclusion.lhs +++ b/doc/MastersThesis/Lhs/Conclusion.lhs @@ -1,31 +1,28 @@
-Chapters 2 and 3 explained the relationship between software, FF-GPAC and the mathematical world of differential equations. As a follow-up, Chapter 4 raised intuition and practical understanding of \texttt{FACT} via a detailed walkthrough of an example. Chapters 5, 6, and 7 identified some problems with the current implementation, such as lack of performance, the discrete time issue, DSL's familiarity, and addressed both problems via caching and interpolation. This chapter, \textit{Conclusion}, draws limitations, future improvements that can bring \texttt{FACT} to a higher level of abstraction and some final conclusions about the project.
+Chapters 2 and 3 explained the relationship between software, FF-GPAC and the mathematical world of differential equations. As a follow-up, Chapter 4 raised intuition and practical understanding of \texttt{FACT} via a detailed walkthrough of an example. Chapters 5, 6, and 7 identified some problems with the current implementation, such as lack of performance, the discrete time issue, DSL's conciseness, and addressed these problems via caching and interpolation. This chapter, \textit{Conclusion}, draws limitations, future improvements that can bring \texttt{FACT} to a higher level of abstraction and some final conclusions about the project.
-\section{Limitations} +\section{Future Work} -One of the main concerns is the \textbf{correctness} of \texttt{FACT} between its specification and its final implementation, i.e., refinement. Shannon's GPAC concept acted as the specification of the project, whilst the proposed software attempted to implement it. The criteria used to verify that the software fulfilled its goal were by using it for simulation and via code inspection, both of which are based on human analysis. This connection, however, was \textbf{not} formally verified. Thus, \texttt{FACT} can be a threat to validity if a future formal verification comes up and checks that the parallel between those two can't be guaranteed. +\subsection{Formalism} -Further, there is also an issue to regards to \textbf{validation}. In order to know that the mathematical description of the problem is being correctly mapped onto a model representation some formal work needs to be done. This was not explored, and it was considered out of the scope of the thesis. However, such aspect dictates if the specification for further implementation is actually correct and describes its mathematical counterpart. So, checking for validation is just as important as verifying refinement. +One of the main concerns is the \textit{correctness} of \texttt{FACT} between its specification and its final implementation, i.e., refinement. Shannon's GPAC concept acted as the specification of the project, whilst the proposed software attempted to implement it. The criteria used to verify that the software fulfilled its goal were by using it for simulation and via code inspection, both of which are based on human analysis. This connection, however, was \textit{not} formally verified --- no model checking tools were used for its validation. In order to know that the mathematical description of the problem is being correctly mapped onto a model representation some formal work needs to be done. 
This was not explored, and it was considered out of scope for this work.
-This lack of formalism extends to the typeclasses as well. The programming language of choice, Haskell, does \textbf{not} provide any proofs that the created types actually follow the typeclasses' properties, even if the requested functions type check. This burden is on the developer to manually write down such proofs, a non-explored aspect of this work.
+This lack of formalism extends to the typeclasses as well. The programming language of choice, Haskell, does \textit{not} provide any proofs that the created types actually follow the typeclasses' properties --- something that can be achieved with \textit{dependently typed} languages and/or tools such as Rocq, PVS, Agda, Idris and Lean. In Haskell, this burden is on the developer to manually write down such proofs, a non-explored aspect of this work. Hence, this work can be better understood as a \textit{proof of concept} for FFACT, and one potential improvement would be to port it to more powerful and specialized programming languages, such as the ones mentioned earlier. Because FP is highly encouraged in those languages, such a port would not be a major roadblock. Thus, these tools would assure a solid mapping between the mathematical description of the problem, GPAC's specification and FFACT's implementation, including the
+use of chosen typeclasses.
-As explained in chapters 1 and 2, there are some extensions that increase the capabilities of Shannon's original GPAC model. One of these extensions, FF-GPAC, was the one chosen to be modeled via software. However, there are other extensions that not only expand the types of functions that can be modeled, e.g., hypertranscendental functions, but also explore new properties, such as Turing universitality~\cite{Graca2004, Graca2016}. The proposed software didn't touch on those enhancements and restricted the set of functions to only algebraic functions.
+\subsection{Extensions}
-Finally, there is the language itself, Haskell. Although Haskell's type system allowed a great mapping between the numerical methods and its nuances to created types, its simplicity started to fall apart when impurity came into picture. The side effect overhead makes \texttt{FACT} hard to reason about in terms of maintenance, especially for newcomers that intent to expand the software's functionalities.
+As explained in chapters 1 and 2, there are some extensions that increase the capabilities of Shannon's original GPAC model. One of these extensions, FF-GPAC, was the one chosen to be modeled via software. However, there are other extensions that not only expand the types of functions that can be modeled, e.g., hypertranscendental functions, but also explore new properties, such as Turing universality~\cite{Graca2004, Graca2016}. The proposed software didn't touch on those enhancements and restricted the set of functions to only algebraic functions. More recent extensions of GPAC should also be explored to simulate an even broader set of functions present in the continuous time domain.
-\section{Future Improvements}
+In regard to numerical methods, one of the immediate improvements would be to use an \textit{adaptive} size for the solver time step that \textit{changes dynamically} at run time. This strategy controls the errors accumulated when using the derivative by adapting the size of the time step. Hence, it starts backtracking previous steps with smaller time steps until some error threshold is satisfied, thus providing finer, more granular control to the numerical methods, coping with approximation errors due to larger time steps.
-There are solutions to mitigate the problems presented in the previous section. First, to address refinement, the simulation could be assessed by continuous domain specialists.
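A hypothetical sketch of such a strategy (step doubling with an Euler step; not implemented in \texttt{FACT}, and the name \textit{adaptiveEuler} is invented for illustration): one full step is compared against two half steps, the step is retried with a smaller $h$ whenever the estimated local error exceeds the tolerance, and $h$ is allowed to grow again after an accepted step.

```haskell
-- Hypothetical adaptive Euler, illustration only: f t y is the derivative.
adaptiveEuler :: (Double -> Double -> Double) -> Double -> Double
              -> Double -> Double -> Double -> Double
adaptiveEuler f t y tEnd h tol
  | t >= tEnd = y
  | err > tol = adaptiveEuler f t y tEnd (h' / 2) tol              -- backtrack: retry smaller step
  | otherwise = adaptiveEuler f (t + h') yHalf2 tEnd (h' * 2) tol  -- accept, try growing h
  where
    h'     = min h (tEnd - t)      -- never overshoot the interval
    yFull  = y + h' * f t y        -- one Euler step of size h'
    yHalf1 = y + (h' / 2) * f t y  -- two Euler steps of size h'/2
    yHalf2 = yHalf1 + (h' / 2) * f (t + h' / 2) yHalf1
    err    = abs (yHalf2 - yFull)  -- local error estimate
```

For $y' = y$ on $[0, 1]$ the result approaches $e$, with the time step shrinking automatically wherever the local error estimate demands it.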
Also, proof-assistant tools, such as Rocq and PVS, could be used to re-write \texttt{FACT} with a proper formal basis, hence establishing a solid map between the mathematical description, specification and implementation. Further, the same tools can leverage the correctness of the typeclasses' implementation, via demonstrating that it assures the axioms and properties demanded by each typeclass. More recent extensions of GPAC should also be explored to simulate an even broader set of functions present in the continuous time domain. +\subsection{Refactoring} -In regards to numerical methods, one of the immediate improvements would be to use \textbf{adaptive} size for the solver time step that \textbf{change dynamically} in run time. This strategy controls the errors accumulated when using the derivative by adapting the size of the time step. Hence, it starts backtracking previous steps with smaller time steps until some error threshold is satisfied, thus providing finer and granular control to the numerical methods, coping with approximation errors due to larger time steps. - -In terms of the used technology, some ideas come to mind related to abstracting out duplicated \textbf{patterns} across the code base. The proposed software used a mix of high level abstractions, such as algebraic types and typeclasses, with some low level abstractions, e.g., explicit memory manipulation. One potential improvement would be to explore an entirely \textbf{pure} based approach, meaning that all the necessary side effects would be handled \textbf{only} by high-level concepts internally, hence decreasing complexity of the software. For instance, the memory allocated via the \texttt{memo} function acts as a \textbf{state} of the numerical solver. Other Haskell abstractions, such as the \texttt{ST} monad~\footnote{\texttt{ST} Monad \href{https://wiki.haskell.org/State\_Monad}{\textcolor{blue}{wiki page}}.}, could be considered for future improvements towards purity. 
Going even further, given that \texttt{FACT}
+In terms of the used technology, some ideas come to mind related to abstracting out duplicated \textit{patterns} across the code base. The proposed software used a mix of high level abstractions, such as algebraic types and typeclasses, with some low level abstractions, e.g., explicit memory manipulation. One potential improvement would be to explore an entirely \textit{pure} based approach, meaning that all the necessary side effects would be handled \textit{only} by high-level concepts internally, hence decreasing complexity of the software. For instance, the memory allocated via the \texttt{memo} function acts as a \textit{state} of the numerical solver. Other Haskell abstractions, such as the \texttt{ST} monad~\footnote{\texttt{ST} Monad \href{https://wiki.haskell.org/State\_Monad}{\textcolor{blue}{wiki page}}.}, could be considered for future improvements towards purity. Going even further, given that \texttt{FACT}
already uses \texttt{ReaderT}, a combination of monads could be used to better unify all different behavior -- in Haskell, an option would be to use \textit{monad transformers}. For instance, to combine the reader and state monads, something like the \texttt{RWS} monad~\footnote{\texttt{RWS} Monad \href{https://hackage.haskell.org/package/mtl-2.2.2/docs/Control-Monad-RWS-Lazy.html}{\textcolor{blue}{hackage documentation}}.}, a monad that combines the monads \texttt{Reader}, \texttt{Writer} and \texttt{State}, may be the final goal for a completely pure but effective solution. Also, there's GPAC and its mapping to Haskell features. As explained previously, some basic units of GPAC are being modeled by the \texttt{Num} typeclass, present in Haskell's \texttt{Prelude} module.
By using more specific and customized numerical typeclasses~\footnote{Examples of \href{https://guide.aelve.com/haskell/alternative-preludes-zr69k1hc}{\textcolor{blue}{alternative preludes}}.}, it might be possible to better express these basic units and take advantage of better performance and convenience that these alternatives provide.
-\newpage - \section{Final Thoughts}
When Shannon proposed a formal foundation for the Differential Analyzer~\cite{Shannon}, mathematical abstractions were leveraged to model continuous time. However, after the transistor era, a new set of concepts that lack this formal basis was developed, some of which crippled our capacity of simulating reality. Later, the need for some formalism made a comeback for modeling physical phenomena with abstractions that take \textit{time} into consideration. Models of computation~\cite{LeeModeling, LeeChallenges, LeeComponent, LeeSangiovanni} and the ForSyDe framework~\cite{Sander2017, Seyed2020} are examples of this change in direction. Nevertheless, Shannon's original idea is now being discussed again with some improvements~\cite{Graca2003, Graca2004, Graca2016} and being transposed to high level programming languages in the hybrid system domain~\cite{Edil2018}.
diff --git a/doc/MastersThesis/Lhs/Design.lhs b/doc/MastersThesis/Lhs/Design.lhs index b3df5eb..ee24d94 100644 --- a/doc/MastersThesis/Lhs/Design.lhs +++ b/doc/MastersThesis/Lhs/Design.lhs @@ -13,7 +13,7 @@
In the previous chapter, the importance of making a bridge between two different The General Purpose Analog Computer, or GPAC, is a model for the Differential Analyzer --- a mechanical machine controlled by a human operator~\cite{Graca2016}. This machine is composed of a set of shafts interconnected in such a manner that a given differential equation is expressed by a shaft and other mechanical units transmit their values across the entire machine~\cite{Shannon, Graca2004}.
For instance, shafts that represent independent variables directly interact with shafts that depict dependent variables. The machine is primarily composed of four types of units: gear boxes, adders, integrators and input tables~\cite{Shannon}. These units provide useful operations to the machine, such as multiplication, addition, integration and saving the computed values. The main goal of this machine is to solve ordinary differential equations via numerical solutions.
-In order to add a formal basis to the machine, Shannon built the GPAC model, a mathematical model sustained by proofs and axioms~\cite{Shannon}. The end result was a set of rules for which types of equations can be modeled as well as which units are the minimum necessary for modeling them and how they can be combined. All algebraic functions (e.g. quotients of polynomials and irrational algebraic functions) and algebraic-trascendental functions (e.g. exponentials, logarithms, trigonometric, Bessel, elliptic and probability functions) can be modeled using a GPAC circuit~\cite{Edil2018, Shannon}. Moreover, the four preceding mechanical units were renamed and together created the minimum set of \textbf{circuits} for a given GPAC~\cite{Edil2018}. Figure \ref{fig:gpacBasic} portrays these basic units, followed by descriptions of their behaviour, inputs and outputs.
+In order to add a formal basis to the machine, Shannon built the GPAC model, a mathematical model sustained by proofs and axioms~\cite{Shannon}. The end result was a set of rules for which types of equations can be modeled as well as which units are the minimum necessary for modeling them and how they can be combined. All algebraic functions (e.g. quotients of polynomials and irrational algebraic functions) and algebraic-transcendental functions (e.g. exponentials, logarithms, trigonometric, Bessel, elliptic and probability functions) can be modeled using a GPAC circuit~\cite{Edil2018, Shannon}.
Moreover, the four preceding mechanical units were renamed and together created the minimum set of \textit{circuits} for a given GPAC~\cite{Edil2018}. Figure \ref{fig:gpacBasic} portrays these basic units, followed by descriptions of their behaviour, inputs and outputs.

\figuraBib{GPACBasicUnits}{The combination of these four basic units composes any GPAC circuit (taken from~\cite{Edil2018} with permission)}{}{fig:gpacBasic}{width=.95\textwidth}%

@@ -40,7 +40,7 @@ During the definition of the DSL, parallels will map the aforementioned basic un
\section{The Shape of Information}
\label{sec:types}

-Types in programming languages represent the format of information. Figure \ref{fig:simpleTypes} illustrates types with an imaginary representation of their shape and Figure \ref{fig:functions} shows how types can be used to restrain which data can be plumbered into and from a function. In the latter image, function \textit{lessThan10} has the type signature \texttt{Int -> Bool}, meaning that it accepts \texttt{Int} data as input and produces \texttt{Bool} data as the output. These types are used to make constratins and add a safety layer in compile time, given that using data with different types as input, e.g, \texttt{Char} or \texttt{Double}, is regarded as a \textbf{type error}.
+Types in programming languages represent the format of information. Figure \ref{fig:simpleTypes} illustrates types with an imaginary representation of their shape and Figure \ref{fig:functions} shows how types can be used to restrain which data can be plumbed into and out of a function. In the latter image, the function \textit{lessThan10} has the type signature \texttt{Int -> Bool}, meaning that it accepts \texttt{Int} data as input and produces \texttt{Bool} data as the output. These types are used to impose constraints and add a safety layer at compile time, given that using data with a different type as input, e.g., \texttt{Char} or \texttt{Double}, is regarded as a \textit{type error}.
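As a concrete illustration of the paragraph above, \textit{lessThan10} can be written in a couple of lines of Haskell (a minimal sketch; only the name and the \texttt{Int -> Bool} signature come from the text):

```haskell
-- The type signature restricts the input to Int and the output to Bool.
lessThan10 :: Int -> Bool
lessThan10 n = n < 10

-- lessThan10 3  evaluates to True
-- lessThan10 42 evaluates to False
-- Applying lessThan10 to a Char or a Double is rejected
-- at compile time as a type error.
```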
\begin{figure}[ht!]
\centering
@@ -59,9 +59,9 @@ Types in programming languages represent the format of information. Figure \ref{
\end{minipage}
\end{figure}

-Primitive types, e.g., \texttt{Int}, \texttt{Double} and \texttt{Char}, can be \textbf{composed} to create more powerful data types, capable of modeling complicated data structures. In this context, composition means binding or gluing existent types together to create more sophisticated abstractions, such as recursive structures and records of information. Two \textbf{algebraic data types} are the type composition mechanism provided by Haskell to bind existent types together.
+Primitive types, e.g., \texttt{Int}, \texttt{Double} and \texttt{Char}, can be \textit{composed} to create more powerful data types, capable of modeling complicated data structures. In this context, composition means binding or gluing existing types together to create more sophisticated abstractions, such as recursive structures and records of information. The two kinds of \textit{algebraic data types} described next are the type composition mechanisms provided by Haskell to bind existing types together.

-The sum type, also known as tagged union in type theory, is an algebraic data type that introduces \textbf{choice} across multiple options using a single label. For instance, a type named \texttt{Parity} can represent the parity of a natural number. It has two options or representatives: \texttt{Even} \textbf{or} \texttt{Odd}, where these are mutually exclusive. When using this type either of them will be of type \texttt{Parity}. A given sum type can have any number of representatives, but only one of them can be used at a given moment. Figure \ref{fig:sumType} depicts examples of sum types with their syntax in the language, in which a given entry of the type can only assume one of the available possibilities.
Another use case depicted in the image is the type \texttt{DigitalStates}, which describes the possible states in digital circuits as one of three options: \texttt{High}, \texttt{Low} and \texttt{Z}.
+The sum type, also known as tagged union in type theory, is an algebraic data type that introduces \textit{choice} across multiple options using a single label. For instance, a type named \texttt{Parity} can represent the parity of a natural number. It has two options or representatives: \texttt{Even} \textit{or} \texttt{Odd}, where these are mutually exclusive. When using this type, either of them will be of type \texttt{Parity}. A given sum type can have any number of representatives, but only one of them can be used at a given moment. Figure \ref{fig:sumType} depicts examples of sum types with their syntax in the language, in which a given entry of the type can only assume one of the available possibilities. Another use case depicted in the image is the type \texttt{DigitalStates}, which describes the possible states in digital circuits as one of three options: \texttt{High}, \texttt{Low} and \texttt{Z}.

\begin{figure}[ht!]
\centering
@@ -81,7 +81,7 @@ The sum type, also known as tagged union, is an algebraic data ty
Figure \ref{fig:productType} illustrates the syntax used in Haskell to create product types as well as another example of combined data, the type \texttt{SpacePosition}. It represents spatial position in three dimensional space, combining spatial coordinates in a single place.
+The second type composition mechanism available is the product type, which \textit{combines} types using a type constructor. While the sum type adds choice in the language, this data type requires multiple types to assemble a new one in a mutually inclusive manner. For example, a digital clock composed of two numbers, hours and minutes, can be portrayed by the type \texttt{ClockTime}, which is a combination of two separate numbers under the wrapper \texttt{Time}. In order to have any possible time, it is necessary to provide \textit{both} parts. Effectively, the product type executes a Cartesian product with its parts. Figure \ref{fig:productType} illustrates the syntax used in Haskell to create product types as well as another example of combined data, the type \texttt{SpacePosition}. It represents spatial position in three dimensional space, combining spatial coordinates in a single place.

\begin{figure}[ht!]
\centering
@@ -101,11 +101,11 @@ The second type composition mechanism available is the product type, which \text
\centering
\includegraphics[width=0.95\linewidth]{MastersThesis/img/ProductType}
\end{minipage}
-\caption{Product types are a combination of different sets, where you pick a representative from each one. Digital clocks' time and objects' coordinates in space are common use cases. In Haskell, a product type can be defined using a \textbf{record} alongside with the constructor, where the labels for each member inside it are explicit.}
+\caption{Product types are a combination of different sets, where you pick a representative from each one. Digital clocks' time and objects' coordinates in space are common use cases.
In Haskell, a product type can be defined using a \textit{record} alongside the constructor, where the labels for each member inside it are explicit.}
\label{fig:productType}
\end{figure}

-Within algebraic data types, it is possible to abstract the \textbf{structure} out, meaning that the outer shell of the type can be understood as a common pattern changing only the internal content. For instance, if a given application can take advantage of integer values but want to use the same configuration as the one presented in the \texttt{SpacePosition} data type, it's possible to add this customization. This feature is known as \textit{parametric polymorphism}, a powerful tool available in Haskell's type system. An example is presented in Figure \ref{fig:parametricPoly} using the \texttt{SpacePosition} type structure, where its internal types are being parametrized, thus allowing the use of other types internally, such as \texttt{Float}, \texttt{Int} and \texttt{Double}.
+Within algebraic data types, it is possible to abstract the \textit{structure} out, meaning that the outer shell of the type can be understood as a common pattern changing only the internal content. For instance, if a given application can take advantage of integer values but wants to use the same configuration as the one presented in the \texttt{SpacePosition} data type, it is possible to add this customization. This feature is known as \textit{parametric polymorphism}, a powerful tool available in Haskell's type system. An example is presented in Figure \ref{fig:parametricPoly} using the \texttt{SpacePosition} type structure, where its internal types are being parametrized, thus allowing the use of other types internally, such as \texttt{Float}, \texttt{Int} and \texttt{Double}.

\begin{figure}[ht!]
\centering
@@ -127,7 +127,7 @@ Within algebraic data types, it is possible to abstract the \textbf{structure} o
\label{fig:parametricPoly}
\end{figure}

-In some situations, changing the type of the structure is not the desired property of interest. There are applications where some sort of \textbf{behaviour} is a necessity, e.g., the ability of comparing two instances of a custom type. This nature of polymorphism is known as \textit{ad hoc polymorphism}, which is implemented in Haskell via what is similar to java-like interfaces, so-called \textbf{typeclasses}~\cite{Wadler1989}. However, establishing a contract with a typeclass differs from an interface in a fundamental aspect: rather than inheritance being given to the type, it has a lawful implementation, meaning that \textbf{mathematical formalism} is assured for it, although the implementer is not obligated to prove its laws on a language level. As an example, the implementation of the typeclass \texttt{Eq} gives to the type all comparable operations ($==$ and $!=$). Figure \ref{fig:adHocPoly} shows the implementation of \texttt{Ord} typeclass for the presented \texttt{ClockTime}, giving it capabilities for sorting instances of such type.
+In some situations, changing the type of the structure is not the desired property of interest. There are applications where some sort of \textit{behaviour} is a necessity, e.g., the ability to compare two instances of a custom type. This nature of polymorphism is known as \textit{ad hoc polymorphism}, which is implemented in Haskell via what is similar to Java-like interfaces, so-called \textit{typeclasses}~\cite{Wadler1989}. However, establishing a contract with a typeclass differs from an interface in a fundamental aspect: rather than inheritance being given to the type, it has a lawful implementation, meaning that \textit{mathematical formalism} is assured for it, although the implementer is not obligated to prove its laws on a language level.
As an example, the implementation of the typeclass \texttt{Eq} gives the type all comparison operations ($==$ and $/=$). Figure \ref{fig:adHocPoly} shows the implementation of the \texttt{Ord} typeclass for the presented \texttt{ClockTime}, giving it capabilities for sorting instances of such type.

\begin{figure}[ht!]
\centering
@@ -150,16 +150,16 @@ In some situations, changing the type of the structure is not the desired proper
\label{fig:adHocPoly}
\end{figure}

-Algebraic data types, when combined with polymorphism, are a powerful tool in programming, being a useful way to model the domain of interest. However, both sum and product types cannot portray by themselves the intuition of a \textbf{procedure}. A data transformation process, as showed in Figure \ref{fig:functions}, can be utilized in a variety of different ways. Imagine, for instance, a system where validation can vary according to the current situation. Any validation algorithm would be using the same data, such as a record called \texttt{SystemData}, and returning a boolean as the result of the validation, but the internals of these functions would be totally different. This is represented in Figure \ref{fig:pipeline}. In Haskell, this motivates the use of functions as \textbf{first class citizens}, meaning that they are values and can be treated equally in comparison with data types that carries information, such as being used as arguments to another functions, so-called high order functions.
+Algebraic data types, when combined with polymorphism, are a powerful tool in programming, being a useful way to model the domain of interest. However, both sum and product types cannot portray by themselves the intuition of a \textit{procedure}. A data transformation process, as shown in Figure \ref{fig:functions}, can be utilized in a variety of different ways. Imagine, for instance, a system where validation can vary according to the current situation.
Any validation algorithm would be using the same data, such as a record called \texttt{SystemData}, and returning a boolean as the result of the validation, but the internals of these functions would be totally different. This is represented in Figure \ref{fig:pipeline}. In Haskell, this motivates the use of functions as \textit{first class citizens}, meaning that they are values and can be treated equally in comparison with data types that carry information, such as being used as arguments to other functions, so-called higher-order functions.

\figuraBib{Pipeline}{Replacements for the validation function within a pipeline like the above are common}{}{fig:pipeline}{width=.75\textwidth}%

\section{Modeling Reality}
\label{sec:diff}

-The continuous time problem explained in the introduction was initially addressed by mathematics, which represents physical quantities by \textbf{differential equations}. This set of equations establishes a relationship between functions and their respective derivatives; the function express the variable of interest and its derivative describe how it changes over time. It is common in the engineering and in the physics domain to know the rate of change of a given variable, but the function itself is still unknown. These variables describe the state of the system, e.g, velocity, water flow, electrical current, etc. When those variables are allowed to vary continuously --- in arbitrarily small increments --- differential equations arise as the standard tool to describe them.
+The continuous time problem explained in the introduction was initially addressed by mathematics, which represents physical quantities by \textit{differential equations}. This set of equations establishes a relationship between functions and their respective derivatives; the function expresses the variable of interest and its derivative describes how it changes over time.
It is common in the engineering and physics domains to know the rate of change of a given variable while the function itself is still unknown. These variables describe the state of the system, e.g., velocity, water flow, electrical current, etc. When those variables are allowed to vary continuously --- in arbitrarily small increments --- differential equations arise as the standard tool to describe them.

-While some differential equations have more than one independent variable per function, being classified as a \textbf{partial differential equation}, some phenomena can be modeled with only one independent variable per function in a given set, being described as a set of \textbf{ordinary differential equations}. However, because the majority of such equations does not have an analytical solution, i.e., cannot be described as a combination of other analytical formulas, numerical procedures are used to solve the system. These mechanisms \textbf{quantize} the physical time duration into an interval of numbers, each spaced by a \textbf{time step} from the other, and the sequence starts from an \textbf{initial value}. Afterward, the derivative is used to calculate the slope or the direction in which the tangent of the function is moving in time in order to predict the value of the next step, i.e., determine which point better represents the function in the next time step. The order of the method varies its precision during the prediction of the steps, e.g, the Runge-Kutta method of 4th order is more precise than the Euler method or the Runge-Kutta of 2nd order.
+While some differential equations have more than one independent variable per function, being classified as a \textit{partial differential equation}, some phenomena can be modeled with only one independent variable per function in a given set, being described as a set of \textit{ordinary differential equations}.
However, because the majority of such equations do not have an analytical solution, i.e., cannot be described as a combination of other analytical formulas, numerical procedures are used to solve the system. These mechanisms \textit{quantize} the physical time duration into an interval of numbers, each spaced by a \textit{time step} from the other, and the sequence starts from an \textit{initial value}. Afterward, the derivative is used to calculate the slope or the direction in which the tangent of the function is moving in time in order to predict the value of the next step, i.e., determine which point better represents the function in the next time step. The order of the method determines its precision when predicting the steps, e.g., the Runge-Kutta method of 4th order is more precise than the Euler method or the Runge-Kutta of 2nd order.

These numerical methods are used to solve problems specified by the following mathematical relations:

@@ -168,7 +168,7 @@ These numerical methods are used to solve problems specified by the followin
\begin{equation}
\dot{y}(t) = f(t, y(t))
\label{eq:diffEq}
\end{equation}

-As showed, both the derivative and the function --- the mathematical formulation of the system --- varies according to \textbf{time}. Both acts as functions in which for a given time value, it produces a numerical outcome. Moreover, this equality assumes that the next step following the derivative's direction will not be that different from the actual value of the function $y$ if the time step is small enough. Further, it is assumed that in case of a small enough time step, the difference between time samples is $h$, i.e., the time step. In order to model this mathematical relationship between the functions and its respective derivative, these methods use iteration-based approximations.
For instance, the following equation represents one step of the first-order Euler method, the simplest numerical method:
+As shown, both the derivative and the function --- the mathematical formulation of the system --- vary according to \textit{time}. Both act as functions in which, for a given time value, a numerical outcome is produced. Moreover, this equality assumes that the next step following the derivative's direction will not be that different from the actual value of the function $y$ if the time step is small enough. Further, it is assumed that in case of a small enough time step, the difference between time samples is $h$, i.e., the time step. In order to model this mathematical relationship between the functions and their respective derivatives, these methods use iteration-based approximations. For instance, the following equation represents one step of the first-order Euler method, the simplest numerical method:

\begin{equation}
y_{n + 1} = y_n + hf(t_n, y_n)

@@ -211,11 +211,11 @@ Any representation of a physical system that can be modeled by a set of differen
\centering
\includegraphics[width=0.95\linewidth]{MastersThesis/img/SimpleDynamics}
\end{minipage}
-\caption{In Haskell, the \texttt{type} keyword works for alias. The first draft of the \texttt{CT} type is a \textbf{function}, in which providing a floating point value as time returns another value as outcome.}
+\caption{In Haskell, the \texttt{type} keyword works as an alias. The first draft of the \texttt{CT} type is a \textit{function}, in which providing a floating point value as time returns another value as outcome.}
\label{fig:firstDynamics}
\end{figure}

-This type seems to capture the concept, whilst being compatible with the definition of a tagged system presented by Lee and Sangiovanni~\cite{LeeSangiovanni}. However, because numerical methods assume that the time variable is \textbf{discrete}, i.e., it is in the form of \textbf{iterations} that they solve differential equations.
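The Euler step $y_{n + 1} = y_n + hf(t_n, y_n)$ shown above can be sketched as a self-contained Haskell function; the names \textit{eulerStep} and \textit{eulerSolve} are illustrative and not part of the DSL:

```haskell
-- One Euler iteration: advance (t, y) by a time step h, following
-- the slope that the derivative f reports at the current point.
eulerStep :: (Double -> Double -> Double)  -- f t y, the derivative
          -> Double                        -- h, the time step
          -> (Double, Double)              -- (t_n, y_n)
          -> (Double, Double)              -- (t_{n+1}, y_{n+1})
eulerStep f h (t, y) = (t + h, y + h * f t y)

-- Iterate the step n times from the initial value y0 at time t0.
eulerSolve :: (Double -> Double -> Double)
           -> Double -> (Double, Double) -> Int -> (Double, Double)
eulerSolve f h start n = iterate (eulerStep f h) start !! n

-- Example: y' = y with y(0) = 1 has the exact solution e^t, so
-- eulerSolve (\_ y -> y) 0.001 (0, 1) 1000 approximates (1, e).
```

Shrinking the time step improves the approximation, which is exactly the trade-off between precision and cost that motivates higher-order methods such as Runge-Kutta.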
Thus, some tweaks to this type are needed, such as the number of the current iteration, which method is being used, in which stage the method is and when the final time of the simulation will be reached. With this in mind, new types are introduced. Figure \ref{fig:dynamicsAux} shows the auxiliary types to build a new version of the \texttt{CT} type.
+This type seems to capture the concept, whilst being compatible with the definition of a tagged system presented by Lee and Sangiovanni~\cite{LeeSangiovanni}. However, numerical methods assume that the time variable is \textit{discrete}, i.e., it is in the form of \textit{iterations} that they solve differential equations. Thus, some tweaks to this type are needed, such as the number of the current iteration, which method is being used, in which stage the method is and when the final time of the simulation will be reached. With this in mind, new types are introduced. Figure \ref{fig:dynamicsAux} shows the auxiliary types to build a new version of the \texttt{CT} type.

\ignore{
\begin{code}

diff --git a/doc/MastersThesis/Lhs/Enlightenment.lhs b/doc/MastersThesis/Lhs/Enlightenment.lhs
index 1def305..60beea2 100644
--- a/doc/MastersThesis/Lhs/Enlightenment.lhs
+++ b/doc/MastersThesis/Lhs/Enlightenment.lhs
@@ -38,10 +38,10 @@ Previously, we presented in detail the latter core type of the implementation, t
\section{From Models to Models}

-Systems of differential equations reside in the mathematical domain. In order to \textbf{execute} using the \texttt{FACT} DSL, this model needs to be converted into an executable model following the DSL's guidelines. Further, we saw that these requirements resemble FF-GPAC's description of its basic units and rules of composition. Thus, the mappings between these worlds need to be established. Chapters 2 and 3 explained the mapping between \texttt{FACT} and FF-GPAC.
It remains to map the \textit{semantics} of the mathematical world to the \textit{operational} world of \texttt{FACT}. This mapping goes as the following:
+Systems of differential equations reside in the mathematical domain. In order to \textit{execute} it using the \texttt{FACT} DSL, such a model needs to be converted into an executable model following the DSL's guidelines. Further, we saw that these requirements resemble FF-GPAC's description of its basic units and rules of composition. Thus, the mappings between these worlds need to be established. Chapters 2 and 3 explained the mapping between \texttt{FACT} and FF-GPAC. It remains to map the \textit{semantics} of the mathematical world to the \textit{operational} world of \texttt{FACT}. This mapping goes as follows:

\begin{itemize}
- \item The relationship between the derivatives and their respective functions will be modeled by \textbf{feedback} loops with \texttt{Integrator} type.
+ \item The relationship between the derivatives and their respective functions will be modeled by \textit{feedback} loops with the \texttt{Integrator} type.
 \item The initial condition will be modeled by the \texttt{initial} pointer within an integrator.
 \item Combinational aspects, such as addition and multiplication of constants and the time $t$, will be represented by typeclasses and the \texttt{CT} type.
\end{itemize}

@@ -73,15 +73,15 @@ $\dot{y} = y + t \quad \quad y(0) = 1$

\figuraBib{Rivika2GPAC}{The developed DSL translates a system described by differential equations to an executable model that resembles FF-GPAC's description}{}{fig:rivika2gpac}{width=.8\textwidth}%

-In line 5, a record with type \texttt{Integrator} is created, with $1$ being the initial condition of the system. Line 6 creates a \textbf{state variable}, a label that gives us access to the output of an integrator, \texttt{integ} in this case.
Afterward, in line 7, the \textit{updateInteg} function connects the inputs to a given integrator by creating a combinational circuit, \texttt{(y + t)}. Polynomial circuits and integrators' outputs can be used as available inputs, as well as the \textit{time} of the simulation. Finally, line 8 returns the state variable as the output for the \textbf{driver}, the main topic of the next section.
+In line 5, a record with type \texttt{Integrator} is created, with $1$ being the initial condition of the system. Line 6 creates a \textit{state variable}, a label that gives us access to the output of an integrator, \texttt{integ} in this case. Afterward, in line 7, the \textit{updateInteg} function connects the inputs to a given integrator by creating a combinational circuit, \texttt{(y + t)}. Polynomial circuits and integrators' outputs can be used as available inputs, as well as the \textit{time} of the simulation. Finally, line 8 returns the state variable as the output for the \textit{driver}, the main topic of the next section.

-There is, however, an useful improvement to be made into the definition of a model within the DSL. The presented example used only a single state variable, although it is common to have \textbf{multiple} state variables, i.e., multiple integrators interacting with each other, modeling different aspects of a given scenario. Moreover, when dealing with multiple state variables, it is important to maintain \textbf{synchronization} between them, i.e., the same \texttt{Parameters} is being applied to \textbf{all} state variables at the same time.
+There is, however, a useful improvement to be made to the definition of a model within the DSL. The presented example used only a single state variable, although it is common to have \textit{multiple} state variables, i.e., multiple integrators interacting with each other, modeling different aspects of a given scenario.
Moreover, when dealing with multiple state variables, it is important to maintain \textit{synchronization} between them, i.e., the same \texttt{Parameters} record is being applied to \textit{all} state variables at the same time.

-To address both of these requirements, we will use the \textit{sequence} function, available in Haskell's standard library. This function manipulates \textbf{nested} structures and change their internal structure. The only requirement is that the outer type have to implement the \texttt{Traversable} typeclass. For instance, applying this function to a list of values of type \texttt{Maybe} would generate a single \texttt{Maybe} value in which its content is a list of the previous content individually wrapped by the \texttt{Maybe} type. This is only possible because the external or "bundler" type, list in this case, has implemented the \texttt{Traversable} typeclass. Figure \ref{fig:sequence} depicts the example before and after applying the function.
+To address both of these requirements, we will use the \textit{sequence} function, available in Haskell's standard library. This function manipulates \textit{nested} structures and changes their internal structure. The only requirement is that the outer type has to implement the \texttt{Traversable} typeclass. For instance, applying this function to a list of values of type \texttt{Maybe} would generate a single \texttt{Maybe} value whose content is a list of the values that were previously wrapped individually by the \texttt{Maybe} type. This is only possible because the external or "bundler" type, list in this case, has implemented the \texttt{Traversable} typeclass. Figure \ref{fig:sequence} depicts the example before and after applying the function.
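The behaviour of \textit{sequence} described above can be checked directly against the standard library (a small, self-contained sketch; the name \textit{allPresent} is illustrative):

```haskell
-- sequence turns a list of Maybe values inside-out: a list of
-- wrapped values becomes one wrapped list, because the list type
-- implements the Traversable typeclass.
allPresent :: [Maybe Int] -> Maybe [Int]
allPresent = sequence

-- allPresent [Just 1, Just 2, Just 3] evaluates to Just [1,2,3].
-- allPresent [Just 1, Nothing] evaluates to Nothing: a single
-- missing element invalidates the whole result.
```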
\figuraBib{Sequence}{Because the list implements the \texttt{Traversable} typeclass, it allows this type to use the \textit{traverse} and \textit{sequence} functions, in which both are related to changing the internal behaviour of the nested structures}{}{fig:sequence}{width=.95\textwidth}%

-Similarly to the preceding example, the list structure will be used to involve all the state variables with type \texttt{CT Double}. This tweak is effectively creating a \textbf{vector} of state variables whilst sharing the same notion of time across all of them. So, the final type signature of a model is \texttt{CT [Double]} or, by using a type aliases for \texttt{[Double]} as \texttt{Vector}, \texttt{CT Vector}. A second alias can be created to make it more descriptive, as exemplified in Figure \ref{fig:exampleMultiple}:
+Similarly to the preceding example, the list structure will be used to enclose all the state variables with type \texttt{CT Double}. This tweak effectively creates a \textit{vector} of state variables whilst sharing the same notion of time across all of them. So, the final type signature of a model is \texttt{CT [Double]} or, by using a type alias for \texttt{[Double]} as \texttt{Vector}, \texttt{CT Vector}. A second alias can be created to make it more descriptive, as exemplified in Figure \ref{fig:exampleMultiple}:

\begin{figure}[ht!]
\begin{minipage}{.5\textwidth} @@ -107,7 +107,7 @@ $\dot{x} = y * x \quad \quad x(0) = 1$ $\dot{y} = y + t \quad \quad y(0) = 1$ \end{center} \end{minipage} -\caption{A \textbf{state vector} comprises multiple state variables and requires the use of the \textit{sequence} function to sync time across all variables.} +\caption{A \textit{state vector} comprises multiple state variables and requires the use of the \textit{sequence} function to sync time across all variables.} \label{fig:exampleMultiple} \end{figure} @@ -141,15 +141,15 @@ runCT m t sl = \end{spec} On line 3, we convert the final \textit{time value} for the simulation into an interval value for the simulation (\texttt{iv}) --- the simulation always starts at 0 and goes all the -way up to the requested time. Next up, on line 4, we convert the interval to an \textit{iteration} interval in the format of a tuple, i.e., the continuous interval becomes the tuple $(0, \frac{stopTime - startTime}{timeStep})$, in which the second value of the tuple is \textbf{rounded}. From line 5 to line 11, we are defining an auxiliary function \textit{parameterise}. This function picks a natural number, which represents the iteration +way up to the requested time. Next up, on line 4, we convert the interval to an \textit{iteration} interval in the format of a tuple, i.e., the continuous interval becomes the tuple $(0, \frac{stopTime - startTime}{timeStep})$, in which the second value of the tuple is \textit{rounded}. From line 5 to line 11, we are defining an auxiliary function \textit{parameterise}. This function picks a natural number, which represents the iteration index, and creates a new record with the type \texttt{Parameters}. Additionally, it uses the auxiliary function \textit{iterToTime} (line 7), which converts the iteration number from -the domain of discrete \textbf{steps} to the domain of \textbf{discrete time}, i.e., the time the solver methods can operate with (chapter 5 will explore more of this concept). 
This conversion is based on the time step being used, as well as which method and in which stage it is for that specific iteration. Finally, line 13 produces the outcome of the \textit{runCT} function. The final result is the output from a function called \textit{map} piped it as an argument for the \textit{sequence} function.
+the domain of discrete \textit{steps} to the domain of \textit{discrete time}, i.e., the time the solver methods can operate with (chapter 5 will explore more of this concept). This conversion is based on the time step being used, as well as which method and in which stage it is for that specific iteration. Finally, line 13 produces the outcome of the \textit{runCT} function. The final result is the output from a function called \textit{map}, piped as an argument to the \textit{sequence} function.

-The \textit{map} operation is provided by the \texttt{Functor} of the list monad, and it applies an arbitrary function to the internal members of a list in a \textbf{sequential} manner. In this case, the \textit{parameterise} function, composed with the continuous machine \texttt{m}, is the one being mapped. Thus, a custom value of the type \texttt{Parameters} is taking place of each natural natural number in the list, and this is being applied to the received \texttt{CT} value. It produces a list of answers in order, each one wrapped in the \texttt{IO} monad. To abstract out the \texttt{IO}, thus getting \texttt{IO [a]} rather than \texttt{[IO a]}, the \textit{sequence} function finishes the implementation. Additionally, there is an analogous implementation of this function, so-called \textit{runCTFinal}, that return only the final result of the simulation instead of the outputs at the time step samples.
+The \textit{map} operation is provided by the \texttt{Functor} of the list monad, and it applies an arbitrary function to the internal members of a list in a \textit{sequential} manner.
In this case, the \textit{parameterise} function, composed with the continuous machine \texttt{m}, is the one being mapped. Thus, a custom value of the type \texttt{Parameters} takes the place of each natural number in the list, and this is being applied to the received \texttt{CT} value. It produces a list of answers in order, each one wrapped in the \texttt{IO} monad. To abstract out the \texttt{IO}, thus getting \texttt{IO [a]} rather than \texttt{[IO a]}, the \textit{sequence} function finishes the implementation. Additionally, there is an analogous implementation of this function, so-called \textit{runCTFinal}, that returns only the final result of the simulation instead of the outputs at the time step samples. \section{An attractive example} -For the example walkthrough, the same example introduced in the chapter \textit{Introduction} will be used in this section. So, we will be solving a system, composed by a set of chaotic solutions, called \textbf{the Lorenz Attractor}. In these types of systems, the ordinary differential equations are used to model chaotic systems, providing solutions based on parameter values and initial conditions. The original differential equations are presented bellow: +For the example walkthrough, the same example introduced in the chapter \textit{Introduction} will be used in this section. So, we will be solving a system, composed of a set of chaotic solutions, called \textit{the Lorenz Attractor}. In such systems, ordinary differential equations model chaotic behaviour, providing solutions based on parameter values and initial conditions. The original differential equations are presented below: $$ \sigma = 10.0 $$ $$ \rho = 28.0 $$ @@ -201,19 +201,19 @@ To understand the model, we need to follow the execution sequence of the output: \figuraBib{ExampleAllocate}{After \textit{createInteg}, this record is the final image of the integrator.
The function \textit{initialize} protects us against wrong records of the type \texttt{Parameters}, assuring it begins from the first iteration, i.e., $t_0$}{}{fig:allocateExample}{width=.90\textwidth}% -The next step is the creation of the independent state variable $x$ via \textit{readInteg} function (line 15). This variable will read the computations that are executing under the hood by the integrator. The core idea is to read from the computation pointer inside the integrator and create a new \texttt{CT Double} value. Figure \ref{fig:readExample} portrays this mental image. When reading a value from an integrator, the computation pointer is being used to access the memory region previously allocated. Also, what's being stored in memory is a \texttt{CT Double} value. The state variable, $x$ in this case, combines its received \texttt{Parameters} value, so-called \texttt{ps}, and \textbf{applies} it to the stored continuous machine. The result \texttt{v} is then returned. +The next step is the creation of the independent state variable $x$ via the \textit{readInteg} function (line 15). This variable will read the computations that are executed under the hood by the integrator. The core idea is to read from the computation pointer inside the integrator and create a new \texttt{CT Double} value. Figure \ref{fig:readExample} portrays this mental image. When reading a value from an integrator, the computation pointer is being used to access the memory region previously allocated. Also, what's being stored in memory is a \texttt{CT Double} value. The state variable, $x$ in this case, combines its received \texttt{Parameters} value, so-called \texttt{ps}, and \textit{applies} it to the stored continuous machine. The result \texttt{v} is then returned. \figuraBib{ExampleRead}{After \textit{readInteg}, the final floating point value is obtained by reading from memory a computation and passing to it the received parameters record.
The result of this application, $v$, is the returned value}{}{fig:readExample}{width=.90\textwidth}% -The final step is to \textbf{change} the computation \textbf{inside} the memory region (line 18). Until this moment, the stored computation is always returning the value of the system at $t_0$, whilst changing the obtained parameters record to be correct via the \textit{initialize} function. Our goal is to modify this behaviour to the actual solution of the differential equations via using numerical methods, i.e., using the solver of the simulation. The function \textit{updateInteg} fulfills this role and its functionality is illustrated in Figure \ref{fig:changeExample}. With the integrator \texttt{integX} and the differential equation $\sigma (y - x)$ on hand, this function picks the provided parametric record \texttt{ps} and it returns the result of a step of the solver \texttt{RK2}, second-order Runge-Kutta method in this case. Additionally, the solver method receives as a dependency what is being pointed by the \texttt{computation} pointer, represented by \texttt{c} in the image, alongside the differential equation and initial value, pictured by \texttt{d} and \texttt{i} respectively. +The final step is to \textit{change} the computation \textit{inside} the memory region (line 18). Until this moment, the stored computation is always returning the value of the system at $t_0$, whilst changing the obtained parameters record to be correct via the \textit{initialize} function. Our goal is to modify this behaviour to compute the actual solution of the differential equations using numerical methods, i.e., the solver of the simulation. The function \textit{updateInteg} fulfills this role and its functionality is illustrated in Figure \ref{fig:changeExample}.
With the integrator \texttt{integX} and the differential equation $\sigma (y - x)$ on hand, this function picks the provided parametric record \texttt{ps} and returns the result of a step of the \texttt{RK2} solver, a second-order Runge-Kutta method in this case. Additionally, the solver method receives as a dependency what is being pointed to by the \texttt{computation} pointer, represented by \texttt{c} in the image, alongside the differential equation and initial value, pictured by \texttt{d} and \texttt{i} respectively. \figuraBib{ExampleChange}{The \textit{updateInteg} function only does side effects, meaning that it only affects memory. The internal variable \texttt{c} is a pointer to the computation \textit{itself}, i.e., the computation being created references this exact procedure}{}{fig:changeExample}{width=.90\textwidth}% \figuraBib{ExampleFinalModel}{After setting up the environment, this is the final depiction of an independent variable. The reader $x$ reads the values computed by the procedure stored in memory, a second-order Runge-Kutta method in this case}{}{fig:finalModelExample}{width=.90\textwidth}% -Figure \ref{fig:finalModelExample} shows the final image for state variable $x$ after until this point in the execution. Lastly, the state variable is wrapped inside a list and it is applied to the \textit{sequence} function, as explained in the previous section. This means that the list of variable(s) in the model, with the signature \texttt{[CT Double]}, is transformed into a value with the type \texttt{CT [Double]}. The transformation can be visually understood when looking at Figure \ref{fig:finalModelExample}. Instead of picking one \texttt{ps} of type \texttt{Parameters} and returning a value \textit{v}, the same parametric record returns a \textbf{list} of values, with the \textbf{same} parametric dependency being applied to all state variables inside $[x, y, z]$.
+Figure \ref{fig:finalModelExample} shows the final image for state variable $x$ up to this point in the execution. Lastly, the state variable is wrapped inside a list and it is applied to the \textit{sequence} function, as explained in the previous section. This means that the list of variable(s) in the model, with the signature \texttt{[CT Double]}, is transformed into a value with the type \texttt{CT [Double]}. The transformation can be visually understood when looking at Figure \ref{fig:finalModelExample}. Instead of picking one \texttt{ps} of type \texttt{Parameters} and returning a value \textit{v}, the same parametric record returns a \textit{list} of values, with the \textit{same} parametric dependency being applied to all state variables inside $[x, y, z]$. -However, this only addresses \textbf{how} the driver triggers the entire execution, but does \textbf{not} explain how the differential equations are actually being calculated with the \texttt{RK2} numerical method. This is done by the solver functions (\textit{integEuler}, \textit{integRK2} and \textit{integRK4}) and those are all based on equation \ref{eq:solverEquation} regardless of the chosen method. The equation goes as the following: +However, this only addresses \textit{how} the driver triggers the entire execution, but does \textit{not} explain how the differential equations are actually being calculated with the \texttt{RK2} numerical method. This is done by the solver functions (\textit{integEuler}, \textit{integRK2} and \textit{integRK4}) and those are all based on equation \ref{eq:solverEquation} regardless of the chosen method.
The equation reads as follows: $$y_{n+1} = y_n + hf(t_n,y_n) \rightarrow y_n = y_{n-1} + hf(t_{n-1}, y_{n-1})$$ @@ -225,7 +225,7 @@ The equation above makes the dependencies in the \texttt{RK2} example in Figure \item \texttt{i} and \texttt{c} $\Rightarrow$ The initial value of the system, as well as a solver step function, will be used to calculate the previous iteration result ($y_{n-1}$). \end{itemize} -It is worth mentioning that the dependency \texttt{c} is a call of a \textbf{solver step}, meaning that it is capable of calculating the previous step $y_{n-1}$. This is accomplished in a \textbf{recursive} manner, since for every iteration the previous one is necessary. When the base case is achieved, by calculating the value at the first iteration using the \texttt{i} dependency, the recursion stops and the process folds, getting the final result for the iteration that has started the chain. This is the same pattern across all the implemented solvers (\texttt{Euler}, \texttt{RungeKutta2} and \texttt{RungeKutta4}). +It is worth mentioning that the dependency \texttt{c} is a call to a \textit{solver step}, meaning that it is capable of calculating the previous step $y_{n-1}$. This is accomplished in a \textit{recursive} manner, since for every iteration the previous one is necessary. When the base case is achieved, by calculating the value at the first iteration using the \texttt{i} dependency, the recursion stops and the process folds, getting the final result for the iteration that started the chain. This is the same pattern across all the implemented solvers (\texttt{Euler}, \texttt{RungeKutta2} and \texttt{RungeKutta4}). \section{Lorenz's Butterfly} diff --git a/doc/MastersThesis/Lhs/Fixing.lhs b/doc/MastersThesis/Lhs/Fixing.lhs index 61f1e85..ee26aa2 100644 --- a/doc/MastersThesis/Lhs/Fixing.lhs +++ b/doc/MastersThesis/Lhs/Fixing.lhs @@ -21,6 +21,10 @@ it leaks noise into the designer's mind.
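The recursive stepping that the solver functions perform can be sketched in a pure, Euler-only form; this is an illustration of the pattern, not FACT's actual \textit{integEuler}, which threads \texttt{Parameters} and memory through \texttt{CT}:

```haskell
-- y_n is computed from y_{n-1}, recursing down to the initial value at n = 0;
-- the recursive call plays the role of the 'c' dependency described in the text.
eulerStep :: (Double -> Double -> Double)  -- f t y, the derivative
          -> Double                        -- h, the time step
          -> Double                        -- initial value y_0 (the 'i' dependency)
          -> Int                           -- iteration index n
          -> Double                        -- y_n
eulerStep _ _ y0 0 = y0                    -- base case: the chain folds here
eulerStep f h y0 n =
  let yPrev = eulerStep f h y0 (n - 1)     -- previous solver step
      tPrev = fromIntegral (n - 1) * h
  in yPrev + h * f tPrev yPrev             -- y_n = y_{n-1} + h f(t_{n-1}, y_{n-1})

main :: IO ()
main = do
  -- y' = y with y(0) = 1: each Euler step multiplies by (1 + h)
  let y10 = eulerStep (\_ y -> y) 0.1 1 10
  if abs (y10 - 1.1 ^ (10 :: Int)) < 1e-9
    then putStrLn "ok"
    else error "unexpected Euler result"
```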
The designer's concern should be to pay step of translation or noisy setups just adds an extra burden with no real gains on the engineering of simulating continuous time. This chapter will present \textit{FFACT}, an evolution of FACT which aims to reduce the noise even further. +It is worth noting that the term \textit{fixed-point} has different meanings in the domains of engineering and mathematics. When referring to the +fractional number representation within a computer, one may use the term \textit{fixed-point representation}. Thus, to avoid confusion, section~\ref{subsec:fix} starts +by defining the term as a mathematical combinator that can be used to implement recursion. + \section{Integrator's Noise} Chapter 4, \textit{Execution Walkthrough}, described the semantics and usability on an example of a system in mathematical specification @@ -137,7 +141,7 @@ For readers unfamiliar with the use of this combinator, equational reasoning~\ci ... \end{lstlisting} -We left as exercise for the reader to check that the result of this process will yield the factorial of 5, i.e., 120. +The result of this process will yield the factorial of 5, i.e., 120. When using \texttt{fix} to define recursive processes, the function being \emph{applied} to it must be the one defining the convergence criteria for the iterative process of looking for the fixed-point. In our factorial case, this is done via the conditional check at the beginning of the body of the lambda. The fixed point combinator's responsibility is to keep the \emph{repetition} process going -- something that may diverge and run out of computer resources. @@ -191,7 +195,7 @@ The former case, however, needs a special kind of recursion, so-called \emph{val As we are about to understand in Section~\ref{sec:ffact}, the use of value recursion to have monadic bindings with the same convenience of \texttt{letrec} will be the key to our improvement on FFACT over FACT.
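For reference, the factorial definition discussed above runs directly with \texttt{fix} from \texttt{Data.Function}:

```haskell
import Data.Function (fix)

-- The lambda receives "the rest of the recursion" as its first argument;
-- the conditional at the start is the convergence criterion the text mentions.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))

main :: IO ()
main = print (factorial 5)  -- 120, as stated in the text
```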
Fundamentally, it will \emph{tie the recursion knot} done in FACT via the complicated implicit recursion mentioned in Section~\ref{sec:integrator}. -In terms of implementation, this is being achieved by the use of the \texttt{mfix} construct~\cite{levent2000}, which is accompained by a \emph{recursive do} syntax sugar~\cite{levent2002}, with the caveat of not being able to do shadowing -- much like the \texttt{let} and \texttt{where} constructs in Haskell. +In terms of implementation, this is being achieved by the use of the \texttt{mfix} construct~\cite{levent2000}, which is accompanied by \emph{recursive do} syntactic sugar~\cite{levent2002}, with the caveat of not being able to do shadowing -- much like the \texttt{let} and \texttt{where} clauses in Haskell. In order for a type to be able to use this construct, it should follow specific algebraic laws~\cite{leventThesis} to then implement the \texttt{MonadFix} type class found in \texttt{Control.Monad.Fix}~\footnote{\texttt{Control.Monad.Fix} \href{https://hackage.haskell.org/package/base-4.21.0.0/docs/Control-Monad-Fix.html}{\textcolor{blue}{hackage documentation}}.} package: % %% \vspace{-0.8cm} @@ -324,7 +328,8 @@ lorenzSystem = runCT lorenzModel 100 lorenzSolver \end{code} Not surprisingly, the results of this new approach using the monadic fixed-point combinator are very similar to the -performance metrics depicted in chapter 6, \textit{Caching the Speed Pill}. Figure~\ref{fig:fixed-graph} shows the new results: +performance metrics depicted in chapter 6, \textit{Caching the Speed Pill} --- indicating that we are \textit{not} trading performance +for a gain in conciseness.
Figure~\ref{fig:fixed-graph} shows the new results: \figuraBib{Graph3}{Results of FFACT are similar to the final version of FACT.}{}{fig:fixed-graph}{width=.97\textwidth}% diff --git a/doc/MastersThesis/Lhs/Implementation.lhs b/doc/MastersThesis/Lhs/Implementation.lhs index 87df9b2..9c98ab4 100644 --- a/doc/MastersThesis/Lhs/Implementation.lhs +++ b/doc/MastersThesis/Lhs/Implementation.lhs @@ -14,16 +14,16 @@ This chapter details the next steps to simulate continuous-time behaviours. It s \section{Uplifting the CT Type} \label{sec:typeclasses} -The \texttt{CT} type needs \textbf{algebraic operations} to be better manipulated, i.e., useful operations that can be applied to the type preserving its external structure. These procedures are algebraic laws or properties that enhance the capabilities of the proposed function type wrapped by a \texttt{CT} shell. Towards this goal, a few typeclasses need to be implemented. +The \texttt{CT} type needs \textit{algebraic operations} to be manipulated conveniently, i.e., useful operations that can be applied to the type while preserving its external structure. These procedures are algebraic laws or properties that enhance the capabilities of the proposed function type wrapped by a \texttt{CT} shell. Towards this goal, a few typeclasses need to be implemented. Across the spectrum of available typeclasses in Haskell, we are interested in the ones that allow data manipulation with a single or multiple \texttt{CT} and provide mathematical operations. To address the former group of operations, the typeclasses \texttt{Functor}, \texttt{Applicative}, \texttt{Monad} and \texttt{MonadIO} will be implemented. The latter group of properties is dedicated to providing mathematical operations, such as $+$ and $\times$, and it can be acquired by implementing the typeclasses \texttt{Num}, \texttt{Fractional}, and \texttt{Floating}.
-The typeclasses \texttt{Functor}, \texttt{Applicative} and \texttt{Monad} are all \textbf{lifting} operations, meaning that they allow functions to be lifted or involved by the chosen type. While they differ \textbf{which} functions will be lifted, i.e., each one of them lift a function with a different type signature, they share the intuition that these functions will be interacting with the \texttt{CT} type. This perspective is crucial for a practical understanding of these patterns. A function with a certain \textbf{shape} and details will be lifted using one of those typeclasses and their respective operators. +The typeclasses \texttt{Functor}, \texttt{Applicative} and \texttt{Monad} are all \textit{lifting} operations, meaning that they allow functions to be lifted into, or wrapped by, the chosen type. While they differ in \textit{which} functions will be lifted, i.e., each one of them lifts a function with a different type signature, they share the intuition that these functions will be interacting with the \texttt{CT} type. This perspective is crucial for a practical understanding of these patterns. A function with a certain \textit{shape} and details will be lifted using one of those typeclasses and their respective operators. Given that the \texttt{CT} type is just a type alias with \texttt{ReaderT} as the underlying type, all of these lift operations are already provided in Haskell's libraries. However, it is still valuable to present their implementation to completely understand what the final DSL will look like.
Hence, the following implementations will -assume we \textbf{aren't} use CT as the type alias and instead we will be showing the implementations as if we are using the definition used previously~\cite{Lemos2022} for the +assume we are \textit{not} using CT as a type alias; instead, we show the implementations as if we were using the definition used previously~\cite{Lemos2022} for the \texttt{CT} type: \begin{purespec} @@ -46,7 +46,7 @@ instance Functor CT where \label{fig:functor} \end{figure} -The next typeclass, \texttt{Applicative}, deals with functions that are inside the \texttt{CT} type. When implemented (again, referring to the non-type-alias version), this algebraic operation lifts this internal function, wrapped by the type of choice, applying the \textbf{external} type to its \textbf{internal} members, thus generating again a function with the signature \texttt{CT a -> CT b}. The minimum requirements for this typeclass is the function \textit{pure}, a function responsible for wrapping any value with the \texttt{CT} wrapper, and the \texttt{<*>} operator, which does the aforementioned interaction between the internal values with the outer shell. The implementation of this typeclass is presented in the code bellow, in which the dependency \texttt{df} has the signature \texttt{CT (a -> b)} and its internal function \texttt{a -> b} is being lifted to the \texttt{CT} type. Figure \ref{fig:applicative} illustrates the described lifting with \texttt{Applicative}. +The next typeclass, \texttt{Applicative}, deals with functions that are inside the \texttt{CT} type. When implemented (again, referring to the non-type-alias version), this algebraic operation lifts this internal function, wrapped by the type of choice, applying the \textit{external} type to its \textit{internal} members, thus generating again a function with the signature \texttt{CT a -> CT b}.
The minimum requirements for this typeclass are the function \textit{pure}, a function responsible for wrapping any value with the \texttt{CT} wrapper, and the \texttt{<*>} operator, which does the aforementioned interaction between the internal values and the outer shell. The implementation of this typeclass is presented in the code below, in which the dependency \texttt{df} has the signature \texttt{CT (a -> b)} and its internal function \texttt{a -> b} is being lifted to the \texttt{CT} type. Figure \ref{fig:applicative} illustrates the described lifting with \texttt{Applicative}. \begin{figure}[ht!] \begin{minipage}{.55\textwidth} @@ -72,7 +72,7 @@ appComposition (CT df) (CT da) \label{fig:applicative} \end{figure} -The third and final lifting is the \texttt{Monad} typeclass. In this case, the function being lifted \textbf{generates} structure as the outcome, although its dependency is a pure value. As Figure \ref{fig:monad} portrays, a function with the signature \texttt{a -> CT b} can be lifted to the signature \texttt{CT a -> CT b} by using the \texttt{Monad} typeclass. This new operation for lifting, so-called \textit{bind}, is written below, alongside the \textit{return} function, which is the same \textit{pure} function from the \texttt{Applicative} typeclass. Together, these two functions represent the minimum requirements of the \texttt{Monad} typeclass. Figure \ref{fig:monad} illustrates the aforementioned scenario. +The third and final lifting is the \texttt{Monad} typeclass. In this case, the function being lifted \textit{generates} structure as the outcome, although its dependency is a pure value. As Figure \ref{fig:monad} portrays, a function with the signature \texttt{a -> CT b} can be lifted to the signature \texttt{CT a -> CT b} by using the \texttt{Monad} typeclass.
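The three liftings can be reproduced on a small, self-contained reader-like wrapper; \texttt{Reader'} below is a simplified stand-in for the non-alias \texttt{CT} (a plain \texttt{Double} environment instead of a \texttt{Parameters} record), so only the shapes of \texttt{fmap}, \texttt{<*>} and \texttt{>>=} carry over:

```haskell
-- Reader' plays the role of the non-alias CT: a function from an
-- environment into IO. The environment type is simplified to Double.
newtype Reader' a = Reader' { runReader' :: Double -> IO a }

instance Functor Reader' where
  fmap f (Reader' da) = Reader' (\ps -> fmap f (da ps))

instance Applicative Reader' where
  pure a = Reader' (\_ -> pure a)
  Reader' df <*> Reader' da =
    Reader' (\ps -> df ps <*> da ps)   -- apply the inner function under the env

instance Monad Reader' where
  return = pure
  Reader' da >>= k =
    Reader' (\ps -> da ps >>= \a -> runReader' (k a) ps)

main :: IO ()
main = do
  let env    = Reader' pure                   -- reads the environment itself
      scaled = fmap (* 2) env                 -- Functor: lift (* 2)
      model  = scaled >>= \v -> pure (v + 1)  -- Monad: lift a -> Reader' b
  r <- runReader' model 10
  if r == 21 then putStrLn "ok" else error "unexpected result"
```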
This new lifting operation, called \textit{bind}, is written below, alongside the \textit{return} function, which is the same \textit{pure} function from the \texttt{Applicative} typeclass. Together, these two functions represent the minimum requirements of the \texttt{Monad} typeclass. Figure \ref{fig:monad} illustrates the aforementioned scenario. \begin{figure}[ht!] \begin{minipage}{.55\textwidth} @@ -143,7 +143,7 @@ Arithmetic basic units, such as the \texttt{Adder Unit} and the \texttt{Multipli \section{Exploiting Impurity} \label{sec:integrator} -The \texttt{CT} type directly interacts with a second type that intensively explores \textbf{side effects}. The notion of a side effect correlates to changing a \textbf{state}, i.e., if you see a computer program as a state machine, an operation that goes beyond returning a value --- it has an observable interference somewhere else --- is called a side effect operation or an \textbf{impure} functionality. Examples of common use cases goes from modifying memory regions to performing input-output procedures via system-calls. The nature of purity comes from the mathematical domain, in which a function is a procedure that is deterministic, meaning that the output value is always the same if the same input is provided --- a false assumption when programming with side effects. An example of an imaginary state machine can be viewed in Figure \ref{fig:stateMachine}. +The \texttt{CT} type directly interacts with a second type that intensively explores \textit{side effects}. The notion of a side effect correlates to changing a \textit{state}, i.e., if you see a computer program as a state machine, an operation that goes beyond returning a value --- it has an observable interference somewhere else --- is called a side effect operation or an \textit{impure} functionality. Examples of common use cases go from modifying memory regions to performing input-output procedures via system calls.
The nature of purity comes from the mathematical domain, in which a function is a procedure that is deterministic, meaning that the output value is always the same if the same input is provided --- a false assumption when programming with side effects. An example of an imaginary state machine can be viewed in Figure \ref{fig:stateMachine}. \begin{figure}[ht!] \begin{minipage}[c]{0.67\textwidth} @@ -155,11 +155,11 @@ The \texttt{CT} type directly interacts with a second type that intensively expl \end{minipage} \end{figure} -In low-level and imperative languages, such as C, Fortran, Zig, Rust, impurity is present across the program and can be easily and naturally added via \textbf{pointers} --- addresses to memory regions where values, or even other pointers, can be stored. In contrast, functional programming languages advocate to a more explicit use of such aspect, given that it prioritizes pure and mathematical functions instead of allowing the developer to mix these two facets. So, the feature is still available but the developer has to take extra effort to add an effectful function into the program, clearly separating these two different styles of programming. +In low-level and imperative languages, such as C, Fortran, Zig, Rust, impurity is present across the program and can be easily and naturally added via \textit{pointers} --- addresses to memory regions where values, or even other pointers, can be stored. In contrast, functional programming languages advocate a more explicit use of such an aspect, given that they prioritize pure, mathematical functions instead of allowing the developer to mix these two facets. So, the feature is still available but the developer has to take extra effort to add an effectful function into the program, clearly separating these two different styles of programming.
-The second core type of the present work, the \texttt{Integrator}, is based on this idea of side effect operations, manipulating data directly in memory, always consulting and modifying data in the impure world. Foremost, it represents a differential equation, as explained in chapter 2, \textit{Design Philosophy} section \ref{sec:diff}, meaning that the \texttt{Integrator} type models the calculation of an \textbf{integral}. It accomplishes this task by driving the numerical algorithms of a given solver method, implying that this is where the \textit{operational} semantics of our DSL reside. +The second core type of the present work, the \texttt{Integrator}, is based on this idea of side effect operations, manipulating data directly in memory, always consulting and modifying data in the impure world. Foremost, it represents a differential equation, as explained in chapter 2, \textit{Design Philosophy} section \ref{sec:diff}, meaning that the \texttt{Integrator} type models the calculation of an \textit{integral}. It accomplishes this task by driving the numerical algorithms of a given solver method, implying that this is where the \textit{operational} semantics of our DSL reside. -With this in mind, the \texttt{Integrator} type is responsible for executing a given solver method to calculate a given integral. This type comprises the initial value of the system, i.e., the value of a given function at time $t_0$, and a pointer to a memory region for future use, called \texttt{computation}. In Haskell, something similar to a pointer and memory allocation can be made by using the \texttt{IORef} type. This memory region is being allocated to be used with the type \texttt{CT Double}. Also, the initial value is also represented by \texttt{CT Double}, and the initial condition can be lifted to this type because the typeclass \texttt{Num} is implemented (section \ref{sec:typeclasses}). 
It is worth noticing that these pointers are pointing to functions or \textbf{computations} and not to double precision values. +With this in mind, the \texttt{Integrator} type is responsible for executing a given solver method to calculate a given integral. This type comprises the initial value of the system, i.e., the value of a given function at time $t_0$, and a pointer to a memory region for future use, called \texttt{computation}. In Haskell, something similar to a pointer and memory allocation can be achieved by using the \texttt{IORef} type. This memory region is being allocated to be used with the type \texttt{CT Double}. The initial value is also represented by \texttt{CT Double}, and the initial condition can be lifted to this type because the typeclass \texttt{Num} is implemented (section \ref{sec:typeclasses}). It is worth noticing that these pointers are pointing to functions or \textit{computations} and not to double precision values. \begin{purespec} data Integrator = Integrator { initial :: CT Double, computation :: IORef (CT Double) } @@ -178,7 +178,7 @@ createInteg i = do return integ \end{spec} -The first step to create an integrator is to manage the initial value, which is a function with the type \texttt{Parameters -> IO Double} wrapped in \texttt{CT} via the \texttt{ReaderT}. After acquiring a given initial value \texttt{i}, the integrator needs to assure that any given parameter record is the beginning of the computation process, i.e., it starts from $t_0$. The \texttt{initialize} function (line 3) fulfills this role, doing a reset in \texttt{time}, \texttt{iteration} and \texttt{stage} in a given parameter record. This is necessary because all the implemented solvers presumes \textbf{sequential steps}, starting from the initial condition. So, in order to not allow this error-prone behaviour, the integrator makes sure that the initial state of the system is configured correctly.
The next step is to allocate memory to this computation --- a procedure that will get you the initial value, while modifying the parameter record dependency of the function accordingly. +The first step to create an integrator is to manage the initial value, which is a function with the type \texttt{Parameters -> IO Double} wrapped in \texttt{CT} via the \texttt{ReaderT}. After acquiring a given initial value \texttt{i}, the integrator needs to assure that any given parameter record is the beginning of the computation process, i.e., it starts from $t_0$. The \texttt{initialize} function (line 3) fulfills this role, resetting \texttt{time}, \texttt{iteration} and \texttt{stage} in a given parameter record. This is necessary because all the implemented solvers presume \textit{sequential steps}, starting from the initial condition. So, in order to prevent this error-prone behaviour, the integrator makes sure that the initial state of the system is configured correctly. The next step is to allocate memory to this computation --- a procedure that will get you the initial value, while modifying the parameter record dependency of the function accordingly. The following stage is to do a type conversion, given that in order to create the \texttt{Integrator} record, it is necessary to have the type \texttt{IORef (CT Double)}. At first glance, this can seem to be an issue because the result of the \textit{newIORef} function is wrapped with the \texttt{IO} monad~\footnote{\label{foot:IORef} \texttt{IORef} \href{https://hackage.haskell.org/package/base-4.16.1.0/docs/Data-IORef.html}{\textcolor{blue}{hackage documentation}}.}. This conversion is the reason why the \texttt{IO} monad is being used in the implementation, and hence forced us to implement the typeclass \texttt{MonadIO}. The function \texttt{liftIO} (line 3) is capable of removing the \texttt{IO} wrapper and adding an arbitrary monad in its place, \texttt{CT} in this case.
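The ``pointer to a computation'' idea at the heart of the integrator can be sketched with \texttt{IORef} alone; the names and stored values below are illustrative, not FACT's actual definitions:

```haskell
import Control.Monad (join)
import Data.IORef

-- The IORef stores an IO action, not a number: reading the ref yields a
-- computation that still has to be run, and writing the ref swaps which
-- procedure future reads will execute.
main :: IO ()
main = do
  ref <- newIORef (pure 1.0 :: IO Double)  -- initially: just return the initial value
  v0  <- join (readIORef ref)              -- dereference, then run the stored action
  writeIORef ref (pure (v0 + 0.5))         -- replace the stored computation
  v1  <- join (readIORef ref)
  if (v0, v1) == (1.0, 1.5)
    then putStrLn "ok"
    else error "unexpected values"
```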
So, after line 3, the \texttt{comp} value has the desired \texttt{CT} type. The remaining step of this creation process is to construct the integrator itself by building up the record with the correct fields, e.g., the CT version of the initial value and the pointer to the constructed computation written in memory (lines 4 and 5). @@ -189,7 +189,7 @@ readInteg = join . liftIO . readIORef . computation To read the content of this region, it is necessary to provide the integrator to the $readInteg$ function. Its implementation is straightforward: build a new \texttt{CT} that applies the given record of \texttt{Parameters} to what's being stored in the region. This is accomplished by using the function \texttt{join} with the $readIORef$ function~\footref{foot:IORef}. -Finally, the function \textit{updateInteg} is a side-effect-only function that changes \textbf{which computation} will be used by the integrator. It is worth noticing that after the creation of the integrator, the \texttt{computation} pointer is addressing a simple and, initially, useless computation: given an arbitrary record of \texttt{Parameters}, it will fix it to assure it is starting at $t_0$, and it will return the initial value in form of a \texttt{CT Double}. +Finally, the function \textit{updateInteg} is a side-effect-only function that changes \textit{which computation} will be used by the integrator. It is worth noticing that after the creation of the integrator, the \texttt{computation} pointer is addressing a simple and, initially, useless computation: given an arbitrary record of \texttt{Parameters}, it will fix it to assure it is starting at $t_0$, and it will return the initial value in the form of a \texttt{CT Double}.
To update this behaviour, the \textit{updateInteg} function changes the content pointed to by the integrator's pointer: \begin{spec} updateInteg :: Integrator -> CT Double -> CT () @@ -208,10 +208,10 @@ updateInteg integ diff = do In the beginning of the function (line 3), we extract the initial value from the integrator, so-called \texttt{i}. Next (line 4 onward), we create a new computation, so-called \texttt{z} --- a function wrapped in the \texttt{CT} type that receives a \texttt{Parameters} record and computes the result based on the solving method. Because this computation needs to look up some configuration values, we use the function \texttt{ask} (line 5) from \texttt{ReaderT} to get our environment values; in this case -a value of type \texttt{Parameters}. Later on, the follow-up step is to build a copy of the \textbf{same process} being pointed by the \texttt{computation} pointer (line 6). -Finally, after checking the chosen solver (line 7), it is executed one iteration of the process by calling \textit{integEuler}, or \textit{integRK2} or \textit{integRK4}. After line 10, this entire process \texttt{z} is being pointed by the \texttt{computation} pointer, being done by the $writeIORef$ function~\footref{foot:IORef}. It may seem confusing that inside \texttt{z} we are \textbf{reading} what is being pointed and later, on the last line of \textit{updateInteg}, this is being used on the final line to update that same pointer. This is necessary, as it will be explained in the next chapter \textit{Execution Walkthrough}, to allow the use of an \textbf{implicit recursion} to assure the sequential aspect needed by the solvers. For now, the core idea is this: the \textit{updateInteg} function alters the \textbf{future} computations; it rewrites which procedure will be pointed by the \texttt{computation} pointer.
This new procedure, which we called \texttt{z}, creates an intermediate computation, \texttt{whatToDo} (line 6), that \textbf{reads} what this pointer is addressing, which is \texttt{z} itself. +a value of type \texttt{Parameters}. Later on, the follow-up step is to build a copy of the \textit{same process} pointed to by the \texttt{computation} pointer (line 6). +Finally, after checking the chosen solver (line 7), one iteration of the process is executed by calling \textit{integEuler}, \textit{integRK2} or \textit{integRK4}. After line 10, this entire process \texttt{z} is pointed to by the \texttt{computation} pointer, which is done by the $writeIORef$ function~\footref{foot:IORef}. It may seem confusing that inside \texttt{z} we are \textit{reading} what is being pointed to and that, on the final line of \textit{updateInteg}, this is used to update that same pointer. This is necessary, as will be explained in the next chapter, \textit{Execution Walkthrough}, to allow the use of an \textit{implicit recursion} to ensure the sequential aspect needed by the solvers. For now, the core idea is this: the \textit{updateInteg} function alters the \textit{future} computations; it rewrites which procedure will be pointed to by the \texttt{computation} pointer. This new procedure, which we called \texttt{z}, creates an intermediate computation, \texttt{whatToDo} (line 6), that \textit{reads} what this pointer is addressing, which is \texttt{z} itself. -Initially, this strange behaviour may cause the idea that this computation will never halt. However, Haskell's \textit{laziness} assures that a given computation will not be computed unless it is necessary to continue execution and this is \textbf{not} the case in the current stage, given that we are just setting the environment in the memory to further calculate the solution of the system. +Initially, this strange behaviour may give the impression that this computation will never halt.
However, Haskell's \textit{laziness} ensures that a given computation is not evaluated unless it is necessary to continue execution, and this is \textit{not} the case at the current stage, given that we are just setting up the environment in memory to later calculate the solution of the system. \section{GPAC Bind II: Integrator} @@ -226,11 +226,11 @@ Lastly, there are the composition rules in FF-GPAC --- constraints that describe \item Each variable of integration of an integrator is the input \textit{t}. \end{enumerate} -The preceding rules include defining connections with polynomial circuits --- an acyclic circuit composed only by constant functions, adders and multipliers. These special circuits are already being modeled in \texttt{FACT} by the \texttt{CT} type with a set of typeclasses, as explained in the previous section about GPAC. The \textbf{integrator functions}, e.g., \textit{readInteg} and \textit{updateInteg}, represent the composition rules. +The preceding rules include defining connections with polynomial circuits --- acyclic circuits composed only of constant functions, adders and multipliers. These special circuits are already being modeled in \texttt{FACT} by the \texttt{CT} type with a set of typeclasses, as explained in the previous section about GPAC. The \textit{integrator functions}, e.g., \textit{readInteg} and \textit{updateInteg}, represent the composition rules.
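How a type plus a typeclass can stand in for FF-GPAC's polynomial units (constants, adders, multipliers) can be illustrated with a toy \texttt{Num} instance over functions of time --- a simplified sketch, not \texttt{FACT}'s actual \texttt{CT} instances:

```haskell
-- Circuits as functions of time; constants, adders and multipliers then
-- come for free from a Num instance (toy illustration, names hypothetical).
newtype Circuit = Circuit { at :: Double -> Double }

instance Num Circuit where
  Circuit f + Circuit g = Circuit (\t -> f t + g t)   -- adder unit
  Circuit f * Circuit g = Circuit (\t -> f t * g t)   -- multiplier unit
  Circuit f - Circuit g = Circuit (\t -> f t - g t)
  abs (Circuit f)       = Circuit (abs . f)
  signum (Circuit f)    = Circuit (signum . f)
  fromInteger n         = Circuit (const (fromInteger n))  -- constant unit

timeC :: Circuit          -- the input t, available to every circuit
timeC = Circuit id

-- A polynomial circuit, 3t^2 + 2t + 1, wired only from the basic units.
poly :: Circuit
poly = 3 * timeC * timeC + 2 * timeC + 1

main :: IO ()
main = print (poly `at` 2.0)   -- 3*4 + 2*2 + 1 = 17.0
```

Ordinary arithmetic syntax then becomes the wiring language for acyclic combinations of constant, adder and multiplier units.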
+Going back to the type signature of the \textit{updateInteg}, \texttt{Integrator -> CT Double -> CT ()}, we can interpret this function as a \textit{wiring} operation. This function connects the output of a polynomial circuit, represented by a value of type \texttt{CT Double}, as an input of the integrator, represented by the \textit{Integrator} type. Because the operation is just setting up the connections between the two, the function ends with the type \texttt{CT ()}. -A polynomial circuit can have the time $t$ or an output of another integrator as inputs, with restricted feedback (rule 1). This rule is being matched by the following: the \texttt{CT} type makes time available to the circuits, and the \textit{readInteg} function allows us to read the output of another integrators.
The second rule, related to multiple inputs in the combinational circuit, is being followed because we can link inputs using arithmetic operations, a feature provided by the \texttt{Num} typeclass. Moreover, because the sole purpose of \texttt{FACT} is to solve differential equations, we are \textit{only} interested in circuits that calculate integrals, meaning that it is guaranteed that the integrand of the integrator will always be the output of a polynomial unit (rule 3), as we saw with the type signature of the \textit{updateInteg} function. The fourth rule is also satisfied, given that the solver methods inside the \textit{updateInteg} function always calculate the integral with respect to the time variable. Figure \ref{fig:gpacBind2} summarizes these last mappings between the implementation and FF-GPAC's integrator and rules of composition. \figuraBib{GPACBind2}{The integrator functions attend the rules of composition of FF-GPAC, whilst the \texttt{CT} and \texttt{Integrator} types match the four basic units}{}{fig:gpacBind2}{width=.9\textwidth}% @@ -300,7 +300,7 @@ integEuler diff init compute = do \end{code} } -On line 5, it is possible to see which functions are available in order to execute a step in the solver. The dependency \texttt{diff} is the representation of the differential equation itself. The initial value, $y(t_0)$, can be obtained by applying any \texttt{Parameters} record to the \texttt{init} dependency function. The next dependency, \texttt{compute}, execute everything previously defined in \textit{updateInteg}; thus effectively executing a new step using the \textbf{same} solver. The result of \texttt{compute} depends on which parametric record will be applied, meaning that we call a new and different solver step in the current one, potentially building a chain of solver step calls.
This mechanism --- of executing again a solver step, inside the solver itself --- is the aforementioned implicit recursion, described in the earlier section. By changing the \texttt{ps} record, originally obtained via the \texttt{ReaderT} with the \texttt{ask} function, to the \textbf{previous} moment and iteration with the solver starting from initial stage, it is guaranteed that for any step the previous one can be computed, a requirement when using numerical methods. +On line 5, it is possible to see which functions are available in order to execute a step in the solver. The dependency \texttt{diff} is the representation of the differential equation itself. The initial value, $y(t_0)$, can be obtained by applying any \texttt{Parameters} record to the \texttt{init} dependency function. The next dependency, \texttt{compute}, executes everything previously defined in \textit{updateInteg}; thus effectively executing a new step using the \textit{same} solver. The result of \texttt{compute} depends on which parametric record will be applied, meaning that we call a new, different solver step from within the current one, potentially building a chain of solver step calls. This mechanism --- of executing a solver step again, inside the solver itself --- is the aforementioned implicit recursion, described in the earlier section. By changing the \texttt{ps} record, originally obtained via the \texttt{ReaderT} with the \texttt{ask} function, to the \textit{previous} moment and iteration with the solver starting from its initial stage, it is guaranteed that for any step the previous one can be computed, a requirement when using numerical methods. With this in mind, the solver function treats the initial value case as the base case of the recursion, whilst it treats the remaining ones normally (line 9). In the base case (lines 7 and 8), the outcome is obtained by just returning the continuous machine with the initial value.
Otherwise, it is necessary to know the result from the previous iteration in order to generate the current one. To address this requirement, the solver builds another parametric record (lines 10 to 13) and calls another solver step (line 14). Also, it calculates the value from applying this record to \texttt{diff} (line 15), the differential equation. These machines, based on \texttt{compute} and \texttt{diff}, need to be modified with a value of type \texttt{Parameters} containing the previous iteration (so-called \texttt{psy} in the code). Hence, the function \texttt{local} is used to alter the existing parameters value in those readers. @@ -416,4 +416,4 @@ integRK4 f i y = do \end{code} } -This finishes this chapter, where we incremented the capabilities of the \texttt{CT} type and used it in combination with a brand-new type, the \texttt{Integrator}. Together these types represent the mathematical integral operation. The solver methods are involved within this implementation, and they use an implicit recursion to maintain their sequential behaviour.
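The implicit recursion with its base case can be reproduced in a self-contained sketch (a hypothetical simplified setting, not \texttt{FACT}'s \textit{integEuler}: the \texttt{Parameters} record is reduced to an iteration number, and no interpolation or caching is involved):

```haskell
import Data.IORef

-- One Euler step chain behind a mutable reference, mimicking the
-- integrator's self-referencing "computation" pointer.
eulerCell :: Double              -- initial value y(t0)
          -> Double              -- time step dt
          -> (Double -> Double)  -- derivative f, for y' = f y
          -> IO (Int -> IO Double)
eulerCell y0 dt f = do
  ref <- newIORef (\_ -> pure y0)       -- initially points to a "useless" computation
  let z n
        | n <= 0    = pure y0           -- base case: just return the initial value
        | otherwise = do
            step <- readIORef ref       -- read what the pointer holds: z itself
            yPrev <- step (n - 1)       -- compute the previous iteration first
            pure (yPrev + dt * f yPrev) -- y_n = y_{n-1} + dt * f(y_{n-1})
  writeIORef ref z                      -- rewrite the pointer to the real solver step
  pure z

main :: IO ()
main = do
  step <- eulerCell 1.0 0.1 id          -- y' = y, y(0) = 1
  y <- step 10
  print y                               -- 1.1^10, roughly 2.594
```

Reading the reference from inside \texttt{z} terminates precisely because the base case stops the chain at the initial condition, just as described above.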
Also, those abstractions were mapped to FF-GPAC's ideas in order to bring some formalism to the project. However, the used mechanisms, such as implicit recursion and memory manipulation, make it hard to visualize how to execute the project given a description of a physical system. The next chapter, \textit{Execution Walkthrough}, will introduce the \textit{driver} of the simulation and present a step-by-step concrete example. Later on, we will improve the DSL to completely remove all the noise introduced in its use because of such implicit recursion. diff --git a/doc/MastersThesis/Lhs/Interpolation.lhs b/doc/MastersThesis/Lhs/Interpolation.lhs index ea139f5..b38d73b 100644 --- a/doc/MastersThesis/Lhs/Interpolation.lhs +++ b/doc/MastersThesis/Lhs/Interpolation.lhs @@ -25,22 +25,22 @@ iterToTime interv solver n (SolverStage st) = \end{code} } -The previous chapter ended anouncing that drawbacks are present in the current implementation. This chapter will introduce the first concern: numerical methods do not reside in the continuous domain, the one we are actually interested in. After this chapter, this domain issue will be addressed via \textbf{interpolation}, with a few tweaks in the integrator and driver. +The previous chapter ended announcing that drawbacks are present in the current implementation. This chapter will introduce the first concern: numerical methods do not reside in the continuous domain, the one we are actually interested in. After this chapter, this domain issue will be addressed via \textit{interpolation}, with a few tweaks in the integrator and driver. \section{Time Domains} -When dealing with continuous time, \texttt{FACT} changes the domain in which \textbf{time} is being modeled. Figure \ref{fig:timeDomains} shows the domains that the implementation interact with during execution: +When dealing with continuous time, \texttt{FACT} changes the domain in which \textit{time} is being modeled.
Figure \ref{fig:timeDomains} shows the domains that the implementation interacts with during execution: \figuraBib{TimeDomains}{During simulation, functions change the time domain to the one that better fits certain entities, such as the \texttt{Solver} and the driver. The image is heavily inspired by a figure in~\cite{Edil2017}}{}{fig:timeDomains}{width=.85\textwidth}% -The problems starts in the physical domain. The goal is to obtain a value of an unknown function $y(t)$ at time $t_x$. However, because the solution is based on \textbf{numerical methods} a sampling process occurs and the continuous time domain is transformed into a \textbf{discrete} time domain, where the solver methods reside --- those are represented by the functions \textit{integEuler}, \textit{integRK2} and \textit{integRK4}. +The problem starts in the physical domain. The goal is to obtain a value of an unknown function $y(t)$ at time $t_x$. However, because the solution is based on \textit{numerical methods}, a sampling process occurs and the continuous time domain is transformed into a \textit{discrete} time domain, where the solver methods reside --- those are represented by the functions \textit{integEuler}, \textit{integRK2} and \textit{integRK4}.
A solver depends on the chosen time step to execute a numerical algorithm. Thus, time is modeled by the sum of $t_0$ with $n\Delta$, where $n$ is a natural number. Hence, from the solver perspective, time is always dependent on the time step, i.e., only values that can be described as $t_0 + n\Delta$ can be properly visualized by the solver. Finally, there's the \textit{iteration} domain, used by the driver functions, \textit{runCT} and \textit{runCTFinal}. When executing the driver, one of its first steps is to call the function \textit{iterationBnds}, which converts the simulation time interval to a tuple of numbers that represent the number of iterations based on the time step of the solver. This function is presented below: \begin{spec} iterationBnds :: Interval -> Double -> (Int, Int) iterationBnds interv dt = (0, ceiling ((stopTime interv - startTime interv) / dt)) \end{spec} -To achieve the total number of iterations, the function \textit{iterationBnds} does a \textbf{ceiling} operation on the sampled result of iterations, based on the time interval (\textit{startTime} and \textit{stopTime}) and the time step (\texttt{dt}). The second member of the tuple is always the answer, given that it is assumed that the first member of the tuple is always zero. +To achieve the total number of iterations, the function \textit{iterationBnds} does a \textit{ceiling} operation on the sampled result of iterations, based on the time interval (\textit{startTime} and \textit{stopTime}) and the time step (\texttt{dt}). The second member of the tuple is the answer, given that the first member is assumed to be always zero. The function that takes us from the iteration axis back to the discrete time domain is the \textit{iterToTime} function.
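A standalone version of this calculation behaves as follows (a sketch assuming the simulation interval is a plain pair rather than \texttt{FACT}'s \texttt{Interval} record):

```haskell
-- Standalone version of the iteration-count calculation (assumption: the
-- simulation interval is a plain (start, stop) pair, not FACT's Interval).
iterationBnds :: (Double, Double) -> Double -> (Int, Int)
iterationBnds (start, stop) dt = (0, ceiling ((stop - start) / dt))

main :: IO ()
main = do
  print (iterationBnds (0, 5.3) 1.0)   -- (0,6): 5.3 s at dt = 1 needs six iterations
  print (iterationBnds (0, 5.0) 0.5)   -- (0,10)
```

The rounding direction matters: \texttt{ceiling} guarantees that the last computed iteration lands at or after the requested stop time.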
It uses the solver information, the current iteration and the interval to transition back to time, as depicted by the following code: @@ -64,7 +64,7 @@ iterToTime interv solver n st = A transformation from iteration to time depends on the chosen solver method due to their next step functions. For instance, the second and fourth order Runge-Kutta methods have more stages, and they use fractions of the time step for more granular use of the derivative function. This is why lines 11 and 12 are using half of the time step. Moreover, all discrete time calculations assume that the value starts from the beginning of the simulation (\textit{startTime}). The result is obtained by the sum of the initial value, the solver-dependent \textit{delta} function and the iteration times the solver time step (line 6). -There is, however, a missing transition: from the discrete time domain to the domain of interest in CPS --- the continuous time axis. This means that if the time value $t_x$ is not present from the solver point of view, it is not possible to obtain $y(t_x)$. The proposed solution is to add an \textbf{interpolation} function into the pipeline, which addresses this transition. +There is, however, a missing transition: from the discrete time domain to the domain of interest in CPS --- the continuous time axis. This means that if the time value $t_x$ is not present from the solver point of view, it is not possible to obtain $y(t_x)$. The proposed solution is to add an \textit{interpolation} function into the pipeline, which addresses this transition. Thus, values in between solver steps will be transferred back to the continuous domain. \section{Tweak I: Interpolation} @@ -97,7 +97,7 @@ type instead of the original \texttt{Int} previously proposed (in chapter 2, \te functions like \textit{integEuler}, \textit{iterToTime}, and \textit{runCT} need to be updated accordingly.
In all of those instances, processing will just continue normally; \texttt{SolverStage} will be used. -Next, the driver needs to be updated. So, the proposed mechanism is the following: the driver will identify these corner cases and communicate to the integrator --- via the new \texttt{Stage} field in the \texttt{Solver} data type --- that the interpolation needs to be added into the pipeline of execution. When this flag is not on, i.e., the \texttt{Stage} informs to continue execution normally, the implementation goes as the previous chapters detailed. This behaviour is altered \textbf{only} in particular scenarios, which the driver will be responsible for identifying. +Next, the driver needs to be updated. So, the proposed mechanism is the following: the driver will identify these corner cases and communicate to the integrator --- via the new \texttt{Stage} field in the \texttt{Solver} data type --- that the interpolation needs to be added into the pipeline of execution. When this flag is not on, i.e., the \texttt{Stage} informs the integrator to continue execution normally, the implementation goes as the previous chapters detailed. This behaviour is altered \textit{only} in particular scenarios, which the driver will be responsible for identifying. It remains to re-implement the driver functions. The driver will notify the integrator that an interpolation needs to take place. The code below shows these changes: @@ -147,7 +147,7 @@ runCT m t sl = in init values ++ [runReaderT m ps] \end{spec} -The implementation of \textit{iterationBnds} uses \textit{ceiling} function because this rounding is used to go to the iteration domain. However, given that the interpolation \textbf{requires} both solver steps --- the one that came before $t_x$ and the one immediately +The implementation of \textit{iterationBnds} uses the \textit{ceiling} function because this rounding is used to go to the iteration domain.
However, given that the interpolation \textit{requires} both solver steps --- the one that came before $t_x$ and the one immediately afterwards --- the number of iterations always needs to surpass the requested time. For instance, the time 5.3 seconds will demand the fifth and sixth iterations with a time step of 1 second. When using \textit{ceiling}, it is assured that the value of interest will be in the interval of computed values. So, when dealing with 5.3, the integrator will calculate all values up to 6 seconds. Lines 5 to 15 are equal to the previous implementation of the \textit{runCT} function. On line 16, the discrete version of \texttt{t}, \texttt{disct}, will be used for detecting if an @@ -185,7 +185,7 @@ interpolate m = do in z1 + (z2 - z1) * pure ((t - t1) / (t2 - t1)) \end{code} -Lines 1 to 5 continues the simulation with the normal workflow. If a corner case comes in, the reminaing code applies \textbf{linear interpolation} to it. It accomplishes this by first comparing the next and previous discrete times (lines 16 and 19) relative to \texttt{x} (line 11) --- the discrete counterpart of the time of interest \texttt{t} (line 9). These time points are calculated by their correspondent iterations (lines 12 and 13). Then, the integrator calculates the outcomes at these two points, i.e., do applications of the previous and next modeled times points with their respective parametric records (lines 22 and 23). Finally, line 24 executes the linear interpolation with the obtained values that surround the non-discrete time point. This particular interpolation was chosen for the sake of simplicity, but it can be replaced by higher order methods. Figure \ref{fig:interpolate} illustrates the effect of the \textit{interpolate} function when converting domains. +Lines 1 to 5 continue the simulation with the normal workflow. If a corner case comes in, the remaining code applies \textit{linear interpolation} to it.
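The interpolation formula itself can be checked in isolation --- a sketch with hypothetical step values surrounding a non-discrete time point, using the same expression as the last line of \textit{interpolate}:

```haskell
-- The same formula as the last line of interpolate, in isolation:
-- z1 + (z2 - z1) * (t - t1) / (t2 - t1).
lerp :: Double -> (Double, Double) -> (Double, Double) -> Double
lerp t (t1, z1) (t2, z2) = z1 + (z2 - z1) * ((t - t1) / (t2 - t1))

main :: IO ()
main =
  -- hypothetical solver outcomes surrounding t = 5.3 with dt = 1
  print (lerp 5.3 (5.0, 25.0) (6.0, 36.0))  -- roughly 28.3
```

At \texttt{t = t1} the formula yields \texttt{z1} and at \texttt{t = t2} it yields \texttt{z2}, so it agrees with the solver at the two surrounding discrete points.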
It accomplishes this by first comparing the next and previous discrete times (lines 16 and 19) relative to \texttt{x} (line 11) --- the discrete counterpart of the time of interest \texttt{t} (line 9). These time points are calculated from their corresponding iterations (lines 12 and 13). Then, the integrator calculates the outcomes at these two points, i.e., it applies the previous and next modeled time points with their respective parametric records (lines 22 and 23). Finally, line 24 executes the linear interpolation with the obtained values that surround the non-discrete time point. This particular interpolation was chosen for the sake of simplicity, but it can be replaced by higher-order methods. Figure \ref{fig:interpolate} illustrates the effect of the \textit{interpolate} function when converting domains. \begin{spec} updateInteg :: Integrator -> CT Double -> CT () diff --git a/doc/MastersThesis/Lhs/Introduction.lhs b/doc/MastersThesis/Lhs/Introduction.lhs index 276ef8f..2f8e612 100644 --- a/doc/MastersThesis/Lhs/Introduction.lhs +++ b/doc/MastersThesis/Lhs/Introduction.lhs @@ -9,7 +9,7 @@ import MastersThesis.Lhs.Enlightenment Continuous behaviours are deeply embedded into the real world. However, even our most advanced computers are not capable of completely modeling such phenomena due to their discrete nature; thus this remains a still-unsolved challenge. Cyber-physical systems (CPS) --- the integration of computers and physical processes~\cite{LeeModeling, LeeChallenges} --- tackle this problem by attempting to include into the \textit{semantics} of computing the physical notion of \textit{time}~\cite{LeeChallenges, Lee2016, Lee2014, Ungureanu2018, Seyed2020, Edil2021}, i.e., treating time as a measurement of \textit{correctness}, not \textit{performance}~\cite{LeeModeling} nor just an accident of implementation~\cite{LeeChallenges}.
Additionally, many systems perform in parallel, which requires precise and sensitive management of time; a non-achievable goal by using traditional computing abstractions, e.g., \textit{threads}~\cite{LeeChallenges}. -Examples of these concepts are older than the digital computers; analog computers were used to model battleships' fire systems and core functionalities of fly-by-wire aircraft~\cite{Graca2003}. The mechanical metrics involved in these problems change continuously, such as space, speed and area, e.g., the firing's range and velocity are crucial in fire systems, and surfaces of control are indispensable to model aircraft's flaps. The main goal of such models was, and still is, to abstract away the continuous facet of the scenario to the computer. In this manner, the human in the loop aspect only matters when interfacing with the computer, with all the heavy-lifting being done by formalized use of shafts and gears in analog machines~\cite{Shannon, Bush1931, Graca2003}, and by \textbf{software} after the digital era. +Examples of these concepts are older than the digital computers; analog computers were used to model battleships' fire systems and core functionalities of fly-by-wire aircraft~\cite{Graca2003}. The mechanical metrics involved in these problems change continuously, such as space, speed and area, e.g., the firing's range and velocity are crucial in fire systems, and surfaces of control are indispensable to model aircraft's flaps. The main goal of such models was, and still is, to abstract away the continuous facet of the scenario to the computer. In this manner, the human in the loop aspect only matters when interfacing with the computer, with all the heavy-lifting being done by formalized use of shafts and gears in analog machines~\cite{Shannon, Bush1931, Graca2003}, and by \textit{software} after the digital era. 
Within software, the aforementioned issues --- the lack of time semantics and the wrong tools for implementing concurrency --- are only a glimpse of serious concerns orbiting around CPS. The main villain is that today's computer science and engineering primarily focus on matching software demands, not expressing essential aspects of physical systems~\cite{LeeChallenges, LeeComponent}. Further, its sidekick is the weak formalism surrounding the semantics of model-based design tools; modeling languages whose semantics are defined by the tools rather than by the language itself~\cite{LeeComponent}, encouraging ad-hoc design practices, thus adding inertia into a dangerous legacy we want to avoid~\cite{Churchill1943}. With this in mind, Lee advocated that leveraging better formal abstractions is the paramount goal to advance continuous time modeling~\cite{LeeChallenges, LeeComponent}. More importantly, these new ideas need to embrace the physical world, taking into account predictability, reliability and interoperability. @@ -17,26 +17,32 @@ The development of a \textit{model of computation} (MoC) to define and express m Ingo et al. went even further~\cite{Sander2017} by presenting a framework based on the idea of tagged systems, known as \textit{ForSyDe}. The tool's main goal is to push system design to a higher level of abstraction, by combining MoCs with the functional programming paradigm. The technique separates the design into two phases, specification and synthesis. The former stage, specification, focuses on creating a high-level abstraction model, in which mathematical formalism is taken into account. The latter part, synthesis, is responsible for applying design transformations --- the model is adapted to ForSyDe's semantics --- and mapping this result onto a chosen architecture to later be implemented in a target programming language or hardware platform~\cite{Sander2017}.
Afterward, Seyed-Hosein and Ingo~\cite{Seyed2020} created a co-simulation architecture for multiple models based on ForSyDe's methodology, addressing heterogeneity across languages and tools with different semantics. One example of such tools treated in the reference is Simulink~\footnote{Simulink \href{http://www.mathworks.com/products/simulink/}{\textcolor{blue}{documentation}}.}, the de facto model-based design tool that lacks a formal semantics basis~\cite{Seyed2020}. Simulink being the standard tool for modeling means that, despite all the effort into utilizing a formal approach to model-based design, this is still an open problem. -\section{Proposal} +\section{Contribution} +\label{sec:intro} -The aforementioned works --- the formal notion of MoCs, the ForSyDe framework and its interaction with modeling-related tools like Simulink --- comprise the domain of model-based design or \textbf{model-based engineering}. Furthermore, the main goal of the present work contribute to this area of CPS by creating a domain-specific language tool (DSL) for simulating continuous-time systems that addresses the absence of a formal basis. Thus, this tool will help to cope with the incompatibility of the mentioned sets of abstractions~\cite{LeeChallenges} --- the discreteness of digital computers with the continuous nature of physical phenomena. +The aforementioned works --- the formal notion of MoCs, the ForSyDe framework and its interaction with modeling-related tools like Simulink --- comprise the domain of model-based design or \textit{model-based engineering}. Furthermore, the main goal of the present work is to contribute to this area of CPS by creating a domain-specific language tool (DSL) for simulating continuous-time systems that addresses the absence of a formal basis.
Thus, this tool will help to cope with the incompatibility of the mentioned sets of abstractions~\cite{LeeChallenges} --- the discreteness of digital computers with the continuous nature of physical phenomena. -The proposed DSL has three special properties of interest: it needs to be a set of well-defined \textit{operational} semantics, thus being \textbf{executable}; it needs to be related to a \textit{formalized} reasoning process; and it should bring familiarity in its use to the \textit{system's designer} -- the pilot of the DSL which strives to execute a given specification or golden model. The first aspect provides \textbf{verification via simulation}, a type of verification that is useful when dealing with \textbf{non-preserving} semantic transformations, i.e., modifications and tweaks in the model that do not assure that properties are being preserved. Such phenomena are common within the engineering domain, given that a lot of refinement goes into the modeling process in which previous proof-proved properties are not guaranteed to be maintained after iterations with the model. A work-around solution for this problem would be to prove again that the features are in fact present in the new model; an impractical activity when models start to scale in size and complexity. Thus, by using an executable tool as a virtual workbench, models that suffered from those transformations could be extensively tested and verified. +The proposed DSL has three special properties of interest: -In order to address the second property, a solid and formal foundation, the tool is inspired by the general-purpose analog computer (GPAC) formal guidelines, proposed by Shannon~\cite{Shannon} in 1941. This concept was developed to model a Differential Analyzer --- an analog computer composed by a set of interconnected gears and shafts intended to solve numerical problems~\cite{Graca2004}. 
The mechanical parts represents \textit{physical quantities} and their interaction results in solving differential equations, a common activity in engineering, physics and other branches of science~\cite{Shannon}. The model was based on a set of black boxes, so-called \textit{circuits} or \textit{analog units}, and a set of proved theorems that guarantees that the composition of these units are the minimum necessary to model the system, given some conditions. For instance, if a system is composed by a set of \textit{differentially algebraic} equations with prescribed initial conditions~\cite{Graca2003}, then a GPAC circuit can be built to model it. Later on, some extensions of the original GPAC were developed, going from solving unaddressed problems contained in the original scope of the model~\cite{Graca2003} all the way to make GPAC capable of expressing generable functions, Turing universality and hypertranscendental functions~\cite{Graca2004, Graca2016}. Furthermore, although the analog computer has been forgotten in favor of its digital counterpart~\cite{Graca2003}, recent studies in the development of hybrid systems~\cite{Edil2018} brought GPAC back to the spotlight in the CPS domain. +\begin{itemize} +\item it needs to be a set of well-defined \textit{operational} semantics, thus being \textit{executable}; +\item it needs to be related to a \textit{formalized} process; +\item it should be \textit{concise}; its lack of noise will bring familiarity to the \textit{system's designer} -- the pilot of the DSL who strives to execute a given specification or golden model.
+\end{itemize} -Finally, the third property of interest, the designer's familiarity between the mathematical specification and -the DSL's usability, will be assured by the use of the \textit{fixed-point combinator}; a mathematical construct used in the DSL's machineary to hide implementation details noise from the user's perspective, keeping on the surface only the constructs that matter from the designer's point of view. Hence, it is expected that one with less programming experience but familiar with the system's mathematical description will be able to leverage the DSL either when improving the system's description, using the DSL as a refinment tool, or as a way to execute an already specified system. The present work being a direct continuation~\cite{Lemos2022}, it is important to highlight that this final property is the differentiating factor between the two pieces. +The first aspect provides \textit{verification via simulation}, a type of verification that is useful when dealing with \textit{non-preserving} semantic transformations, i.e., modifications and tweaks in the model that do not assure that properties are being preserved. Such phenomena are common within the engineering domain, given that a lot of refinement goes into the modeling process, in which previously proved properties are not guaranteed to be maintained after iterations with the model. A workaround for this problem would be to prove again that the features are in fact present in the new model; an impractical activity when models start to scale in size and complexity. Thus, by using an executable tool as a virtual workbench, models that suffered from those transformations could be extensively tested and verified. -In order to address the second property, a solid and formal foundation, the tool is inspired by the general-purpose analog computer (GPAC) formal guidelines, proposed by Shannon~\cite{Shannon} in 1941. This concept was developed to model a Differential Analyzer --- an analog computer composed by a set of interconnected gears and shafts intended to solve numerical problems~\cite{Graca2004}.
+In order to address the second property, a solid and formal foundation, the tool is inspired by the general-purpose analog computer (GPAC) formal guidelines, proposed by Shannon~\cite{Shannon} in 1941. This concept was developed to model a Differential Analyzer --- an analog computer composed of a set of interconnected gears and shafts intended to solve numerical problems~\cite{Graca2004}. The mechanical parts represent \textit{physical quantities} and their interaction results in solving differential equations, a common activity in engineering, physics and other branches of science~\cite{Shannon}. The model was based on a set of black boxes, so-called \textit{circuits} or \textit{analog units}, and a set of proved theorems that guarantee that the composition of these units is the minimum necessary to model the system, given some conditions. For instance, if a system is composed of a set of \textit{differentially algebraic} equations with prescribed initial conditions~\cite{Graca2003}, then a GPAC circuit can be built to model it. Later on, some extensions of the original GPAC were developed, going from solving unaddressed problems contained in the original scope of the model~\cite{Graca2003} all the way to making GPAC capable of expressing generable functions, Turing universality and hypertranscendental functions~\cite{Graca2004, Graca2016}. Furthermore, although the analog computer has been forgotten in favor of its digital counterpart~\cite{Graca2003}, recent studies in the development of hybrid systems~\cite{Edil2018} brought GPAC back to the spotlight in the CPS domain.
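To make the GPAC picture concrete, the integrator unit can be thought of as an initial condition plus the accumulated area under a derivative. The Haskell sketch below is a hypothetical toy illustration of that idea using Euler steps; the name \texttt{eulerInteg} is invented here and is not part of the FACT/FFACT API:

```haskell
-- Toy sketch of a GPAC-style integrator: start from the initial
-- condition and accumulate the derivative over fixed Euler steps.
eulerInteg :: Double              -- time step h
           -> Double              -- initial condition y(0)
           -> (Double -> Double)  -- derivative y' as a function of y
           -> Int                 -- number of steps
           -> Double
eulerInteg h y0 f n = iterate step y0 !! n
  where step y = y + h * f y

-- Example: y' = y with y(0) = 1 approximates exp(t); after 1000 steps
-- of h = 0.001 the result is close to e at t = 1.
main :: IO ()
main = print (eulerInteg 1.0e-3 1 id 1000)
```

Composing such integrators, as GPAC's theorems license, is what lets a circuit of analog units stand in for a system of differential equations.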
-\section{Goal} -\label{sec:intro} +Finally, the third property of interest, conciseness to improve +the DSL's usability, will be assured by the use of the \textit{fixed-point combinator}; a mathematical construct used in the DSL's machinery to hide implementation-detail noise from the user's perspective, keeping on the surface only the constructs that matter from the designer's point of view. As the dissertation will explain, this noise happens due to an \textit{abstraction leak} in the original DSL~\cite{Lemos2022}, identified via +an overloaded syntax. +Once the leak is solved, it is expected that the \textit{target audience} --- system designers with less programming experience but familiar with the system's mathematical description --- will be able to leverage the DSL either when improving the system's description, using the DSL as a refinement tool, or as a way to execute an already specified system. The present work being a direct continuation~\cite{Lemos2022}, it is important to highlight that this final property is the main differentiating factor between the two pieces. -The main goal of the present work is to build a library that can solve differential equations and resembles the core idea of the GPAC model. The programming language of choice was \textbf{Haskell}, due to a variety of different reasons. First, the approach of making specialized programming languages, or \textit{vocabularies}, within consistent and well-defined host programming languages has already proven to be valuable, as noted by Landin~\cite{Landin1966}. Second, this strategy is already being used in the CPS domain in some degree, as showed by the ForSyDe framework~\cite{Sander2017, Seyed2020}.
Third, Lee describes a lot of properties~\cite{LeeModeling} that matches the functional programming paradigm almost perfectly: +With these three core properties in mind, the proposed DSL will translate GPAC's original set of black boxes to some executable software leveraging mathematical constructs to simplify its usability. The programming language of choice was \textit{Haskell}, for a variety of reasons. First, the approach of making specialized programming languages, or \textit{vocabularies}, within consistent and well-defined host programming languages has already proven to be valuable, as noted by Landin~\cite{Landin1966}. Second, this strategy is already being used in the CPS domain to some degree, as shown by the ForSyDe framework~\cite{Sander2017, Seyed2020}. Third, Lee describes a lot of properties~\cite{LeeModeling} that match the functional programming paradigm almost perfectly: \begin{itemize} - \item Prevent misconnected MoCs by using great interfaces in between $\Rightarrow$ Such interfaces can be built using Haskell's \textbf{strong type system} + \item Prevent misconnected MoCs by using well-defined interfaces in between $\Rightarrow$ Such interfaces can be built using Haskell's \textit{strong type system} \item Enable composition of MoCs $\Rightarrow$ Composition is a first-class feature in functional programming languages \item It should be possible to conjoin a functional model with an implementation model $\Rightarrow$ Functional programming languages make a clear separation between the \textit{denotational} aspect of the program, i.e., its meaning, and the \textit{operational} functionality \item All too often the semantics emerge accidentally from the software implementation rather than being built-in from the start $\Rightarrow$ A denotative approach with no regard for implementation details is common in the functional paradigm @@ -94,7 +100,8 @@ Chapter 2, \textit{Design Philosophy}, presents the foundation of this work, sta
original work and this work are far apart, the mathematical base is the same. Chapters 3 to 6 describe further improvements made in 2022~\cite{Lemos2022} and 2023~\cite{EdilLemos2023}. These chapters go into detail about the DSL's implementation, such as the abstractions used, going through executable examples, pointing out and addressing problems in its usability and design. Issues like performance and continuous time implementation are explained -and then addressed. The latest of this work is concentrated in Chapter 7, \textit{Fixing Recursion}, which dedicates itself to improving an abstraction +and then addressed. Whilst the implementation of Chapters 2 to 6 was vastly improved during the making of this dissertation, the latest inclusion to this research is +concentrated in Chapter 7, \textit{Fixing Recursion}, which dedicates itself to improving an abstraction leak in the most recent published version of the DSL~\cite{EdilLemos2023}. Those improvements leverage the \textit{fixed point combinator} to eliminate -abstraction leaks, thus making the DSL more familiar to a system's designer. +abstraction leaks, thus making the DSL more concise and familiar to a system's designer. These enhancements were submitted and are awaiting approval at a related journal~\footnote{\href{https://www.cambridge.org/core/journals/journal-of-functional-programming}{\textcolor{blue}{Journal of Functional Programming}}.}. Finally, limitations, future improvements and final thoughts are drawn in Chapter 8, \textit{Conclusion}.
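For readers unfamiliar with the construct Chapter 7 revolves around, the fixed-point combinator expresses recursion without explicit self-reference: in Haskell, \texttt{fix f = let x = f x in x}. A minimal illustration, independent of the DSL's internals:

```haskell
import Data.Function (fix)

-- Factorial written as the fixed point of a non-recursive functional:
-- 'rec' stands for the function being defined, supplied by 'fix'.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))

main :: IO ()
main = print (factorial 5)  -- prints 120
```

The same trick of tying a recursive knot with \texttt{fix} is what later chapters apply to the integrator's value recursion.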
diff --git a/doc/MastersThesis/thesis.lhs b/doc/MastersThesis/thesis.lhs index 84d068f..163eb74 100644 --- a/doc/MastersThesis/thesis.lhs +++ b/doc/MastersThesis/thesis.lhs @@ -43,7 +43,7 @@ \autor{Eduardo L.}{Rocha}% -\titulo{Continuous Time Modeling Made Functional: Fixing Differential Equations with Haskell}% +\titulo{FFACT: A Fix-based Domain-Specific Language based on a Functional Algebra for Continuous Time Modeling}% \palavraschave{equações diferenciais, sistemas contínuos, GPAC, integrador, ponto fixo, recursão monádica} \keywords{differential equations, continuous systems, GPAC, integrator, fixed-point, fixed-point combinator, monadic recursion}% diff --git a/doc/MastersThesis/thesis.lof b/doc/MastersThesis/thesis.lof index 6c2b8e9..aecc6d9 100644 --- a/doc/MastersThesis/thesis.lof +++ b/doc/MastersThesis/thesis.lof @@ -9,12 +9,12 @@ \contentsline {figure}{\numberline {2.3}{\ignorespaces Types are not just labels; they enhance the manipulated data with new information. Their difference in shape can work as the interface for the data.}}{10}{figure.caption.11}% \contentsline {figure}{\numberline {2.4}{\ignorespaces Functions' signatures are contracts; they purespecify which shape the input information has as well as which shape the output information will have.}}{10}{figure.caption.11}% \contentsline {figure}{\numberline {2.5}{\ignorespaces Sum types can be understood in terms of sets, in which the members of the set are available candidates for the outer shell type. Parity and possible values in digital states are examples.}}{11}{figure.caption.12}% -\contentsline {figure}{\numberline {2.6}{\ignorespaces Product types are a combination of different sets, where you pick a representative from each one. Digital clocks' time and objects' coordinates in space are common use cases. 
In Haskell, a product type can be defined using a \textbf {record} alongside with the constructor, where the labels for each member inside it are explicit.}}{11}{figure.caption.13}% +\contentsline {figure}{\numberline {2.6}{\ignorespaces Product types are a combination of different sets, where you pick a representative from each one. Digital clocks' time and objects' coordinates in space are common use cases. In Haskell, a product type can be defined using a \textit {record} alongside the constructor, where the labels for each member inside it are explicit.}}{11}{figure.caption.13}% \contentsline {figure}{\numberline {2.7}{\ignorespaces Depending on the application, different representations of the same structure need to be used due to the domain of interest and/or memory constraints.}}{12}{figure.caption.14}% \contentsline {figure}{\numberline {2.8}{\ignorespaces The minimum requirement for the \texttt {Ord} typeclass is the $<=$ operator, meaning that the functions $<$, $<=$, $>$, $>=$, \texttt {max} and \texttt {min} are now unlocked for the type \texttt {ClockTime} after the implementation. Typeclasses can be viewed as a third dimension in a type.}}{12}{figure.caption.15}% \contentsline {figure}{\numberline {2.9}{\ignorespaces Replacements for the validation function within a pipeline like the above are common.}}{13}{figure.caption.16}% \contentsline {figure}{\numberline {2.10}{\ignorespaces The initial value is used as a starting point for the procedure. The algorithm continues until the time of interest is reached in the unknown function. Due to its large time step, the final answer is really far-off from the expected result.}}{15}{figure.caption.17}% -\contentsline {figure}{\numberline {2.11}{\ignorespaces In Haskell, the \texttt {type} keyword works for alias.
The first draft of the \texttt {CT} type is a \textbf {function}, in which providing a floating point value as time returns another value as outcome.}}{15}{figure.caption.18}% +\contentsline {figure}{\numberline {2.11}{\ignorespaces In Haskell, the \texttt {type} keyword works for alias. The first draft of the \texttt {CT} type is a \textit {function}, in which providing a floating point value as time returns another value as outcome.}}{15}{figure.caption.18}% \contentsline {figure}{\numberline {2.12}{\ignorespaces The \texttt {Parameters} type represents a given moment in time, carrying over all the necessary information to execute a solver step until the time limit is reached. Some useful typeclasses are being derived to these types, given that Haskell is capable of inferring the implementation of typeclasses in simple cases.}}{16}{figure.caption.19}% \contentsline {figure}{\numberline {2.13}{\ignorespaces The \texttt {CT} type is a function of from time related information to an arbitrary potentially effectful outcome value.}}{17}{figure.caption.20}% \contentsline {figure}{\numberline {2.14}{\ignorespaces The \texttt {CT} type can leverage monad transformers in Haskell via \texttt {Reader} in combination with \texttt {IO}.}}{17}{figure.caption.21}% @@ -30,7 +30,7 @@ \contentsline {figure}{\numberline {4.1}{\ignorespaces The integrator functions are essential to create and interconnect combinational and feedback-dependent circuits.}}{32}{figure.caption.29}% \contentsline {figure}{\numberline {4.2}{\ignorespaces The developed DSL translates a system described by differential equations to an executable model that resembles FF-GPAC's description.}}{32}{figure.caption.30}% \contentsline {figure}{\numberline {4.3}{\ignorespaces Because the list implements the \texttt {Traversable} typeclass, it allows this type to use the \textit {traverse} and \textit {sequence} functions, in which both are related to changing the internal behaviour of the nested 
structures.}}{33}{figure.caption.31}% -\contentsline {figure}{\numberline {4.4}{\ignorespaces A \textbf {state vector} comprises multiple state variables and requires the use of the \textit {sequence} function to sync time across all variables.}}{33}{figure.caption.32}% +\contentsline {figure}{\numberline {4.4}{\ignorespaces A \textit {state vector} comprises multiple state variables and requires the use of the \textit {sequence} function to sync time across all variables.}}{33}{figure.caption.32}% \contentsline {figure}{\numberline {4.5}{\ignorespaces When building a model for simulation, the above pipeline is always used, from both points of view. The operations with meaning, i.e., the ones in the \texttt {Semantics} pipeline, are mapped to executable operations in the \texttt {Operational} pipeline, and vice-versa.}}{34}{figure.caption.33}% \contentsline {figure}{\numberline {4.6}{\ignorespaces Using only FF-GPAC's basic units and their composition rules, it's possible to model the Lorenz Attractor example.}}{37}{figure.caption.34}% \contentsline {figure}{\numberline {4.7}{\ignorespaces After \textit {createInteg}, this record is the final image of the integrator. The function \textit {initialize} gives us protecting against wrong records of the type \texttt {Parameters}, assuring it begins from the first iteration, i.e., $t_0$.}}{38}{figure.caption.35}% @@ -45,8 +45,8 @@ \contentsline {figure}{\numberline {5.4}{\ignorespaces The new \textit {updateInteg} function add linear interpolation to the pipeline when receiving a parametric record.}}{48}{figure.caption.43}% \addvspace {10\p@ } \contentsline {figure}{\numberline {6.1}{\ignorespaces With just a few iterations, the exponential behaviour of the implementation is already noticeable.}}{50}{figure.caption.45}% -\contentsline {figure}{\numberline {6.2}{\ignorespaces The new \textit {createInteg} function relies on interpolation composed with memoization. 
Also, this combination \textbf {produces} results from the computation located in a different memory region, the one pointed by the \texttt {computation} pointer in the integrator.}}{56}{figure.caption.47}% -\contentsline {figure}{\numberline {6.3}{\ignorespaces The function \textbf {reads} information from the caching pointer, rather than the pointer where the solvers compute the results.}}{57}{figure.caption.48}% +\contentsline {figure}{\numberline {6.2}{\ignorespaces The new \textit {createInteg} function relies on interpolation composed with memoization. Also, this combination \textit {produces} results from the computation located in a different memory region, the one pointed by the \texttt {computation} pointer in the integrator.}}{56}{figure.caption.47}% +\contentsline {figure}{\numberline {6.3}{\ignorespaces The function \textit {reads} information from the caching pointer, rather than the pointer where the solvers compute the results.}}{57}{figure.caption.48}% \contentsline {figure}{\numberline {6.4}{\ignorespaces The new \textit {updateInteg} function gives to the solver functions access to the region with the cached data.}}{58}{figure.caption.49}% \contentsline {figure}{\numberline {6.5}{\ignorespaces Caching changes the direction of walking through the iteration axis. 
It also removes an entire pass through the previous iterations.}}{59}{figure.caption.50}% \contentsline {figure}{\numberline {6.6}{\ignorespaces By using a logarithmic scale, we can see that the final implementation is performant with more than 100 million iterations in the simulation.}}{63}{figure.caption.53}% diff --git a/doc/MastersThesis/thesis.toc b/doc/MastersThesis/thesis.toc index c8c9fdb..722023c 100644 --- a/doc/MastersThesis/thesis.toc +++ b/doc/MastersThesis/thesis.toc @@ -2,9 +2,8 @@ \babel@toc {american}{}\relax \babel@toc {american}{}\relax \contentsline {chapter}{\numberline {1}Introduction}{1}{chapter.1}% -\contentsline {section}{\numberline {1.1}Proposal}{2}{section.1.1}% -\contentsline {section}{\numberline {1.2}Goal}{4}{section.1.2}% -\contentsline {section}{\numberline {1.3}Outline}{6}{section.1.3}% +\contentsline {section}{\numberline {1.1}Contribution}{2}{section.1.1}% +\contentsline {section}{\numberline {1.2}Outline}{6}{section.1.2}% \contentsline {chapter}{\numberline {2}Design Philosophy}{7}{chapter.2}% \contentsline {section}{\numberline {2.1}Shannon's Foundation: GPAC}{7}{section.2.1}% \contentsline {section}{\numberline {2.2}The Shape of Information}{9}{section.2.2}% @@ -33,16 +32,18 @@ \contentsline {section}{\numberline {6.6}Results with Caching}{61}{section.6.6}% \contentsline {chapter}{\numberline {7}Fixing Recursion}{64}{chapter.7}% \contentsline {section}{\numberline {7.1}Integrator's Noise}{64}{section.7.1}% -\contentsline {section}{\numberline {7.2}The Fixed-Point Combinator}{65}{section.7.2}% +\contentsline {section}{\numberline {7.2}The Fixed-Point Combinator}{66}{section.7.2}% \contentsline {section}{\numberline {7.3}Value Recursion with Fixed-Points}{67}{section.7.3}% \contentsline {section}{\numberline {7.4}Tweak IV: Fixing FACT}{70}{section.7.4}% \contentsline {chapter}{\numberline {8}Conclusion}{74}{chapter.8}% -\contentsline {section}{\numberline {8.1}Limitations}{74}{section.8.1}% -\contentsline {section}{\numberline 
{8.2}Future Improvements}{75}{section.8.2}% -\contentsline {section}{\numberline {8.3}Final Thoughts}{77}{section.8.3}% -\contentsline {chapter}{\numberline {9}Appendix}{78}{chapter.9}% -\contentsline {section}{\numberline {9.1}Literate Programming}{78}{section.9.1}% -\contentsline {chapter}{References}{80}{section*.57}% +\contentsline {section}{\numberline {8.1}Future Work}{74}{section.8.1}% +\contentsline {subsection}{\numberline {8.1.1}Formalism}{74}{subsection.8.1.1}% +\contentsline {subsection}{\numberline {8.1.2}Extensions}{75}{subsection.8.1.2}% +\contentsline {subsection}{\numberline {8.1.3}Refactoring}{75}{subsection.8.1.3}% +\contentsline {section}{\numberline {8.2}Final Thoughts}{76}{section.8.2}% +\contentsline {chapter}{\numberline {9}Appendix}{77}{chapter.9}% +\contentsline {section}{\numberline {9.1}Literate Programming}{77}{section.9.1}% +\contentsline {chapter}{References}{79}{section*.57}% \babel@toc {american}{}\relax \babel@toc {american}{}\relax \babel@toc {american}{}\relax From d2cbfaa0d67f7790234276fb40921ed926a29f60 Mon Sep 17 00:00:00 2001 From: EduardoLR10 Date: Sun, 16 Mar 2025 20:07:23 -0300 Subject: [PATCH 03/10] Fix typo --- doc/MastersThesis/Lhs/Conclusion.lhs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/MastersThesis/Lhs/Conclusion.lhs b/doc/MastersThesis/Lhs/Conclusion.lhs index 18db9c8..e23e784 100644 --- a/doc/MastersThesis/Lhs/Conclusion.lhs +++ b/doc/MastersThesis/Lhs/Conclusion.lhs @@ -7,7 +7,7 @@ Chapters 2 and 3 explained the relationship between software, FF-GPAC and the ma One of the main concerns is the \textit{correctness} of \texttt{FACT} between its specification and its final implementation, i.e., refinement. Shannon's GPAC concept acted as the specification of the project, whilst the proposed software attempted to implement it. The criteria used to verify that the software fulfilled its goal were by using it for simulation and via code inspection, both of which are based on human analysis. 
This connection, however, was \textit{not} formally verified --- no model checking tools were used for its validation. In order to know that the mathematical description of the problem is being correctly mapped onto a model representation, some formal work needs to be done. This was not explored, and it was considered out of the scope for this work. This lack of formalism extends to the typeclasses as well. The programming language of choice, Haskell, does \textit{not} provide any proofs that the created types actually follow the typeclasses' properties --- something that can be achieved with \textit{dependently typed} languages and/or tools such as Rocq, PVS, Agda, Idris and Lean. In Haskell, this burden is on the developer to manually write down such proofs, a non-explored aspect of this work. Hence, this work can be better understood as a \textit{proof of concept} for FFACT, and one potential improvement would be to port it to more powerful and specialized programming languages, such as the ones mentioned earlier. Because FP is highly encouraged in those languages, such a port would not be a major roadblock. Thus, these tools would assure a solid mapping between the mathematical description of the problem, GPAC's specification and FFACT's implementation, including the -use of chosen typeclasses. +use of the chosen typeclasses. \subsection{Extensions} From bf7dca9c2c981eb19dab7f86a1258a49b4ca3e50 Mon Sep 17 00:00:00 2001 From: EduardoLR10 Date: Sun, 16 Mar 2025 20:14:23 -0300 Subject: [PATCH 04/10] Fix typo --- doc/MastersThesis/Lhs/Fixing.lhs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/MastersThesis/Lhs/Fixing.lhs b/doc/MastersThesis/Lhs/Fixing.lhs index ee26aa2..01fba27 100644 --- a/doc/MastersThesis/Lhs/Fixing.lhs +++ b/doc/MastersThesis/Lhs/Fixing.lhs @@ -21,7 +21,7 @@ it leaks noise into the designer's mind.
The designer's concern should be to pay step of translation or noisy setups just adds an extra burden with no real gains on the engineering of simulating continuous time. This chapter will present \textit{FFACT}, an evolution of FACT which aims to reduce the noise even further. -It is worth noting that the term \textit{fixed-point} has different meanings in the domains of engineering and mathematics. When refericing the +It is worth noting that the term \textit{fixed-point} has different meanings in the domains of engineering and mathematics. When referencing the fractional representations within a computer, one may use the \textit{fixed-point method}. Thus, to avoid confusion, section~\ref{subsec:fix} starts by defining the term as a mathematical combinator that can be used to implement recursion. From d539f8fbc72a57e0e0d787cee23021da94b3cf5a Mon Sep 17 00:00:00 2001 From: EduardoLR10 Date: Mon, 24 Mar 2025 01:33:48 -0300 Subject: [PATCH 05/10] Add Edil's comments --- doc/MastersThesis/Lhs/Appendix.lhs | 2 +- doc/MastersThesis/Lhs/Caching.lhs | 20 +-- doc/MastersThesis/Lhs/Conclusion.lhs | 22 ++- doc/MastersThesis/Lhs/Design.lhs | 12 +- doc/MastersThesis/Lhs/Enlightenment.lhs | 14 +- doc/MastersThesis/Lhs/Fixing.lhs | 24 +-- doc/MastersThesis/Lhs/Implementation.lhs | 28 +-- doc/MastersThesis/Lhs/Interpolation.lhs | 8 +- doc/MastersThesis/Lhs/Introduction.lhs | 215 ++++++++++++++++++++--- doc/MastersThesis/bibliography.bib | 10 ++ doc/MastersThesis/img/lorenzSimulink.pdf | Bin 0 -> 7225 bytes doc/MastersThesis/thesis.lhs | 3 + doc/MastersThesis/thesis.lof | 109 ++++++------ doc/MastersThesis/thesis.toc | 85 ++++----- src/Examples/Lorenz.hs | 9 +- 15 files changed, 378 insertions(+), 183 deletions(-) create mode 100644 doc/MastersThesis/img/lorenzSimulink.pdf diff --git a/doc/MastersThesis/Lhs/Appendix.lhs b/doc/MastersThesis/Lhs/Appendix.lhs index feeae86..c1f49e0 100644 --- a/doc/MastersThesis/Lhs/Appendix.lhs +++ b/doc/MastersThesis/Lhs/Appendix.lhs @@ -2,7 +2
@@ \section{Literate Programming} This dissertation made use of literate programming~\footnote{\href{https://en.wikipedia.org/wiki/Literate_programming}{\textcolor{blue}{Literate Programming}}.}, a concept -introduced by Donald Knuth~\cite{knuth1992}. Hence, this thesis can be executed using the same source files that the \texttt{PDF} is created +introduced by Donald Knuth~\cite{knuth1992}. Hence, this document can be executed using the same source files that the \texttt{PDF} is created from. This process requires the following dependencies: \begin{itemize} diff --git a/doc/MastersThesis/Lhs/Caching.lhs b/doc/MastersThesis/Lhs/Caching.lhs index 89fe112..2b287b6 100644 --- a/doc/MastersThesis/Lhs/Caching.lhs +++ b/doc/MastersThesis/Lhs/Caching.lhs @@ -62,7 +62,7 @@ subRunCTFinal m t sl = do \end{code} } -Chapter 5, \textit{Travelling across Domains}, leveraged a major concern with the proposed software: the solvers don't work in the domain of interest, continuous time. This chapter, \textit{Caching the Speed Pill}, addresses a second problem: the performance in \texttt{FACT}. At the end of it, the simulation will be orders of magnitude faster by using a common modern caching strategy to speed up computing processes: memoization. +Chapter 5, \textit{Travelling across Domains}, raised a major concern with the proposed software: the solvers don't work in the domain of interest, continuous time. This Chapter, \textit{Caching the Speed Pill}, addresses a second problem: the performance in \texttt{FACT}. At the end of it, the simulation will be orders of magnitude faster by using a common modern caching strategy to speed up computing processes: memoization. \section{Performance} @@ -91,7 +91,7 @@ Total of Iterations & Execution Time (milliseconds) & Consumed Memory (KB) \\ \ \section{The Saving Strategy} -Before explaining the solution, it is worth describing \textit{why} and \textit{where} this problem arises.
First, we need to take a look back onto the solvers' functions, such as the \textit{integEuler} function, introduced in chapter 3, \textit{Effectful Integrals}: +Before explaining the solution, it is worth describing \textit{why} and \textit{where} this problem arises. First, we need to take a look back at the solver functions, such as the \textit{integEuler} function, introduced in Chapter 3, \textit{Effectful Integrals}: \begin{spec} integEuler :: CT Double @@ -113,9 +113,9 @@ integEuler diff i y = do return v \end{spec} -From chapter 3, we know that lines 10 to 13 serve the purpose of creating a new parametric record to execute a new solver step for the \textit{previous} iteration, in order to calculate the current one. From chapter 4, this code section turned out to be where the implicit recursion came in, because the current iteration needs to calculate the previous one. Effectively, this means that for \textit{all} iterations, \textit{all} previous steps from each one needs to be calculated. The problem is now clear: unnecessary computations are being made for all iterations, because the same solvers steps are not being saved for future steps, although these values do \textit{not} change. In other words, to calculate step 3 of the solver, steps 1 and 2 are the same to calculate step 4 as well, but these values are being lost during the simulation. +From Chapter 3, we know that lines 10 to 13 serve the purpose of creating a new parametric record to execute a new solver step for the \textit{previous} iteration, in order to calculate the current one. From Chapter 4, this code section turned out to be where the implicit recursion came in, because the current iteration needs to calculate the previous one. Effectively, this means that for \textit{all} iterations, \textit{all} previous steps from each one need to be calculated.
The problem is now clear: unnecessary computations are being made for all iterations, because the same solver steps are not being saved for future steps, although these values do \textit{not} change. In other words, steps 1 and 2 are used to calculate step 3 of the solver, and those very same values are needed to calculate step 4, but they are being lost during the simulation. -To estimate how this lack of optimization affects performance, we can calculate how many solver steps will be executed to simulate theLorenz's Attractor example used in chapter 4, \textit{Execution Walkthrough}. The Table \ref{tab:solverSteps} shows the total number of solver steps needed per iteration simulating the Lorenz example with the Euler method. In addition, the amount of steps also increase depending on which solver method is being used, given that in the higher order Runge-Kutta methods, multiple stages count as a new step as well. +To estimate how this lack of optimization affects performance, we can calculate how many solver steps will be executed to simulate the Lorenz Attractor example used in Chapter 4, \textit{Execution Walkthrough}. Table \ref{tab:solverSteps} shows the total number of solver steps needed per iteration simulating the Lorenz example with the Euler method. In addition, the number of steps also increases depending on which solver method is being used, given that in the higher order Runge-Kutta methods, multiple stages count as a new step as well. \begin{table}[H] \centering @@ -138,7 +138,7 @@ This is the cause of the immense hit in performance. However, it also clarifies t The first tweak, \textit{Memoization}, alters the \texttt{Integrator} type. The integrator will now have a pointer to the memory region that stores the previously computed values, meaning that before executing a new computation, it will consult this region first. Because the process is executed in a \textit{sequential} manner, it is guaranteed that the previous result will be used.
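The recomputation blow-up, and what caching buys us, can be sketched in plain Haskell outside of \texttt{FACT}; \textit{totalWithoutCache} and \textit{eulerMemo} below are illustrative names and a simplified stand-in for the library's actual \textit{memo} machinery, not part of it:

```haskell
import Data.IORef
import qualified Data.Map.Strict as Map

-- Total solver steps computed to produce iterations 0..n when nothing is
-- cached: iteration k recomputes its entire prefix, costing k steps, so the
-- total is 0 + 1 + ... + n.
totalWithoutCache :: Int -> Int
totalWithoutCache n = sum [0 .. n]

-- With memoization, each step is computed once and read back afterwards.
-- This sketch caches Euler steps y_k = y_(k-1) + h * f y_(k-1) in a mutable
-- reference, mimicking the sequential consultation described above.
eulerMemo :: Double -> (Double -> Double) -> Double -> Int -> IO Double
eulerMemo h f y0 n = do
  cache <- newIORef (Map.singleton 0 y0)
  let go k = do
        m <- readIORef cache
        case Map.lookup k m of
          Just y  -> pure y                       -- cache hit: no recomputation
          Nothing -> do
            yPrev <- go (k - 1)                   -- previous step first (sequential)
            let yk = yPrev + h * f yPrev
            modifyIORef' cache (Map.insert k yk)  -- save for future iterations
            pure yk
  go n
```

With $h = 1$, $f = id$ and $y_0 = 1$, the cached walk reproduces $y_n = 2^n$ while computing each step exactly once, whereas the uncached scheme pays the quadratic total above.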
Thus, the accumulation of the solver steps will be addressed, and the number of steps will be equal to the number of iterations times how many stages the solver method uses. -The \textit{memo} function creates this memory region for storing values, as well as providing read access to it. This is the only function in \texttt{FACT} that uses a \textit{constraint}, i.e., it restricts the parametric types to the ones that have implemented the requirement. In our case, this function requires that the internal type \texttt{CT} dependency has implemented the \texttt{UMemo} typeclass. Because this typeclass is too complicated to be in the scope of this project, we will settle with the following explanation: it is required that the parametric values are capable of being contained inside an \textit{mutable} array, which is the case for our \texttt{Double} values. As dependencies, the \textit{memo} function receives the computation, as well as the interpolation function that is assumed to be used, in order to attenuate the domain problem described in the previous chapter. This means that at the end, the final result will be piped to the interpolation function. +The \textit{memo} function creates this memory region for storing values, as well as providing read access to it. This is the only function in \texttt{FACT} that uses a \textit{constraint}, i.e., it restricts the parametric types to the ones that have implemented the requirement. In our case, this function requires that the internal type \texttt{CT} dependency has implemented the \texttt{UMemo} typeclass. Because this typeclass is beyond the scope of this project, we will settle for the following explanation: it is required that the parametric values are capable of being contained inside a \textit{mutable} array, which is the case for our \texttt{Double} values.
As dependencies, the \textit{memo} function receives the computation, as well as the interpolation function that is assumed to be used, in order to attenuate the domain problem described in the previous Chapter. This means that at the end, the final result will be piped to the interpolation function. \begin{code} memo :: UMemo e => (CT e -> CT e) -> CT e -> CT (CT e) @@ -199,9 +199,9 @@ data Integrator = Integrator { initial :: CT Double, } \end{code} -Next, two other functions need to be adapted: \textit{createInteg} and \textit{readInteg}. In the former function, the new pointer will be used, and it points to the region where the mutable array will be allocated. In the latter, instead of reading from the computation itself, the read-only pointer will be looking at the \textit{cached} version. These differences will be illustrated by using the same integrator and state variables used in the Lorenz's Attractor example, detailed in chapter 4, \textit{Execution Walkthrough}. +Next, two other functions need to be adapted: \textit{createInteg} and \textit{readInteg}. In the former function, the new pointer will be used, and it points to the region where the mutable array will be allocated. In the latter, instead of reading from the computation itself, the read-only pointer will be looking at the \textit{cached} version. These differences will be illustrated by using the same integrator and state variables used in the Lorenz's Attractor example, detailed in Chapter 4, \textit{Execution Walkthrough}. -The main difference in the updated version of the \textit{createInteg} function is the inclusion of the new pointer that reads the cached memory (lines 4 to 7). The pointer \texttt{computation}, which will be changed by \textit{updateInteg} in a model to the differential equation, is being read in lines 8 to 11 and piped with interpolation and memoization in line 12. 
This approach maintains the interpolation, justified in the previous chapter, and adds the aforementioned caching strategy. Finally, the final result is written in the memory region pointed by the caching pointer (line 13). +The main difference in the updated version of the \textit{createInteg} function is the inclusion of the new pointer that reads the cached memory (lines 4 to 7). The pointer \texttt{computation}, which will be changed by \textit{updateInteg} in a model to the differential equation, is being read in lines 8 to 11 and piped with interpolation and memoization in line 12. This approach maintains the interpolation, justified in the previous Chapter, and adds the aforementioned caching strategy. Finally, the final result is written in the memory region pointed by the caching pointer (line 13). Figure \ref{fig:createInteg} shows that the updated version of the \textit{createInteg} function is similar to the previous implementation. The new field, \texttt{cached}, is a pointer that refers to \texttt{readComp} --- the result of memoization (\texttt{memo}), interpolation (\texttt{interpolate}) and the value obtained by the region pointed by the \texttt{computation} pointer. Given a parametric record \texttt{ps}, \texttt{readComp} gives this record to the value stored in the region pointed by \texttt{computation}. This result is then interpolated via the \texttt{interpolate} block and it is used as a dependency for the \texttt{memo} block. @@ -286,7 +286,7 @@ Figure \ref{fig:memoDirection} depicts this stark difference in approach when us \section{Tweak III: Model and Driver} -The memoization added to \texttt{FACT} needs a second tweak, related to the executable models established in chapter 4. The code bellow is the same example model used in that chapter: +The memoization added to \texttt{FACT} needs a second tweak, related to the executable models established in Chapter 4. 
The code below is the same example model used in that Chapter: \begin{spec} exampleModel :: Model Vector @@ -354,7 +354,7 @@ The main change is the division of the driver into two: one dedicated to "initia \section{Results with Caching} -The following table (Table \ref{tab:betterResults}) shows the same Lorenz's Attractor example used in the first section, but with the preceding tweaks in the \texttt{Integrator} type and the integrator functions. It is worth noting that there is an overhead due to the memoization strategy when running fewer iterations (such as 1 in the table), in which +The following table (Table \ref{tab:betterResults}) shows the same Lorenz's Attractor example used in the first Section, but with the preceding tweaks in the \texttt{Integrator} type and the integrator functions. It is worth noting that there is an overhead due to the memoization strategy when running fewer iterations (such as 1 in the table), in which most time is spent preparing the caching setup --- the in-memory data structure, etc. These modifications allow better and more complicated models to be simulated. For instance, the Lorenz example with a variety of total number of iterations can be checked in Table \ref{tab:masterResults} and in Figure \ref{fig:graph2}.
The next chapter, \textit{Fixing Recursion}, will +The project is currently capable of executing interpolation as well as applying memoization to speed up results. These two drawback solutions, detailed in Chapters 5 and 6, add practicality to \texttt{FACT} as well as make it more competitive. We can, however, go even further and add more familiarity to the DSL. The next Chapter, \textit{Fixing Recursion}, will address this concern. diff --git a/doc/MastersThesis/Lhs/Conclusion.lhs index e23e784..6eb4189 100644 --- a/doc/MastersThesis/Lhs/Conclusion.lhs +++ b/doc/MastersThesis/Lhs/Conclusion.lhs @@ -1,4 +1,19 @@ -Chapters 2 and 3 explained the relationship between software, FF-GPAC and the mathematical world of differential equations. As a follow-up, Chapter 4 raised intuition and practical understanding of \texttt{FACT} via a detailed walkthrough of an example. Chapters 5, 6, and 7 identified some problems with the current implementation, such as lack of performance, the discrete time issue, DSL's conciseness, and addressed both problems via caching and interpolation. This chapter, \textit{Conclusion}, draws limitations, future improvements that can bring \texttt{FACT} to a higher level of abstraction and some final conclusions about the project. +Chapter 2 established the foundation of the implementation, introducing FP concepts and the necessary types +to model continuous time simulation --- with \texttt{CT} being the main type. Chapter 3 extended its power via +the implementation of typeclasses to add functionality for the \texttt{CT} type, such as binary operations and +numerical representation. Further, it also introduced the \texttt{Integrator}, a CRUD-like interface +for it, as well as the available numerical methods for simulation. +As a follow-up, Chapter 4 raised intuition and practical understanding of \texttt{FACT} via a detailed walkthrough of an example.
+Chapter 5 explained and fixed the mix between different domains in the simulation, e.g., continuous time, discrete time and iterations, +via an additional linear interpolation when executing a model. Chapter 6 addressed performance concerns via a memoization strategy. Finally, +Chapter 7 introduced the fixed-point combinator in order to increase the conciseness of the HEDSL, bringing more familiarity to systems designers +experienced with the mathematical descriptions of their systems of interest. This notation enhancement is the distinguishing feature between FACT and FFACT. + +\section{Final Thoughts} + +When Shannon proposed a formal foundation for the Differential Analyzer~\cite{Shannon}, mathematical abstractions were leveraged to model continuous time. However, after the transistor era, a new set of concepts that lack this formal basis was developed, some of which crippled our capacity to simulate reality. Later, the need for some formalism made a comeback for modeling physical phenomena with abstractions that take \textit{time} into consideration. Models of computation~\cite{LeeModeling, LeeChallenges, LeeComponent, LeeSangiovanni} and the ForSyDe framework~\cite{Sander2017, Seyed2020} are examples of this change in direction. Nevertheless, Shannon's original idea is now being discussed again with some improvements~\cite{Graca2003, Graca2004, Graca2016} and being transposed to high level programming languages in the hybrid system domain~\cite{Edil2018}. + +The \texttt{FACT} EDSL~\footnote{\texttt{FACT} \href{https://github.com/FP-Modeling/fact/releases/tag/3.0}{\textcolor{blue}{source code}}.} follows this path of bringing CPS simulation to the highest level of abstraction, via the Haskell programming language, but still taking into account a formal background inspired by the GPAC model. The software uses advanced functional programming techniques to solve differential equations, mapping the abstractions to FF-GPAC's analog units.
Although still limited by the discrete nature of numerical methods, the solution is performant and accurate enough for studies in the cyber-physical domain. \section{Future Work} @@ -23,8 +38,3 @@ For instance, if the reader and state monads, something like the \texttt{RWS} mo Also, there's GPAC and its mapping to Haskell features. As explained previously, some basic units of GPAC are being modeled by the \texttt{Num} typeclass, present in Haskell's \texttt{Prelude} module. By using more specific and customized numerical typeclasses~\footnote{Examples of \href{https://guide.aelve.com/haskell/alternative-preludes-zr69k1hc}{\textcolor{blue}{alternative preludes}}.}, it might be possible to better express these basic units and take advantage of better performance and convenience that these alternatives provide. -\section{Final Thoughts} - -When Shannon proposed a formal foundation for the Differential Analyzer~\cite{Shannon}, mathematical abstractions were leveraged to model continuous time. However, after the transistor era, a new set of concepts that lack this formal basis was developed, and some of which crippled our capacity of simulating reality. Later, the need for some formalism made a comeback for modeling physical phenomena with abstractions that take \textit{time} into consideration. Models of computation~\cite{LeeModeling, LeeChallenges, LeeComponent, LeeSangiovanni} and the ForSyDe framework~\cite{Sander2017, Seyed2020} are examples of this change in direction. Nevertheless, Shannon's original idea is now being discussed again with some improvements~\cite{Graca2003, Graca2004, Graca2016} and being transposed to high level programming languages in the hybrid system domain~\cite{Edil2018}. 
- -The \texttt{FACT} EDSL~\footnote{\texttt{FACT} \href{https://github.com/FP-Modeling/fact/releases/tag/3.0}{\textcolor{blue}{source code}}.} follows this path of bringing CPS simulation to the highest level of abstraction, via the Haskell programming language, but still taking into account a formal background inspired by the GPAC model. The software uses advanced functional programming techniques to solve differential equations, mapping the abstractions to FF-GPAC's analog units. Although still limited by the discrete nature of numerical methods, the solution is performant and accurate enough for studies in the cyber-physical domain. diff --git a/doc/MastersThesis/Lhs/Design.lhs b/doc/MastersThesis/Lhs/Design.lhs index ee24d94..70e609b 100644 --- a/doc/MastersThesis/Lhs/Design.lhs +++ b/doc/MastersThesis/Lhs/Design.lhs @@ -6,7 +6,7 @@ import Control.Monad.Trans.Reader ( ReaderT ) \end{code} } -In the previous chapter, the importance of making a bridge between two different sets of abstractions --- computers and the physical domain --- was established. This chapter will explain the core philosophy behind the implementation of this link, starting with an introduction to GPAC, followed by the type and typeclass systems used in Haskell, as well as understanding how to model the main entities of the problem. At the end, the presented modeling strategy will justify the data types used in the solution, paving the way for the next chapter \textit{Effectful Integrals}. +In the previous Chapter, the importance of making a bridge between two different sets of abstractions --- computers and the physical domain --- was established. This Chapter will explain the core philosophy behind the implementation of this link, starting with an introduction to GPAC, followed by the type and typeclass systems used in Haskell, as well as understanding how to model the main entities of the problem. 
At the end, the presented modeling strategy will justify the data types used in the solution, paving the way for the next Chapter, \textit{Effectful Integrals}. \section{Shannon's Foundation: GPAC} \label{sec:gpac} @@ -35,7 +35,7 @@ Composition rules that restrict how these units can be hooked to one another. Sh \item Each variable of integration of an integrator is the input \textit{t}. \end{itemize} -During the definition of the DSL, parallels will map the aforementioned basic units and composition rules to the implementation. With this strategy, all the mathematical formalism leveraged for analog computers will drive the implementation in the digital computer. Although we do not formally prove a refinement between the GPAC theory, i.e., our epurespecification, and the final implementation of \texttt{FACT}, is an attempt to build a tool with formalism taken into account; one of the most frequent critiques in the CPS domain, as explained in the previous chapter. +During the definition of the DSL, parallels will map the aforementioned basic units and composition rules to the implementation. With this strategy, all the mathematical formalism leveraged for analog computers will drive the implementation in the digital computer. Although we do not formally prove a refinement between the GPAC theory, i.e., our pure specification, and the final implementation of \texttt{FACT}, it is an attempt to build a tool with formalism taken into account; one of the most frequent critiques in the CPS domain, as explained in the previous Chapter.
The type system will model equation \ref{eq:nextStep}, detailed in the previous section. +Our primary goal is to combine the knowledge leveraged in Section \ref{sec:types} --- modeling capabilities of Haskell's algebraic type system --- with the core notion of differential equations presented in Section \ref{sec:diff}. The type system will model equation \ref{eq:nextStep}, detailed in the previous Section. Any representation of a physical system that can be modeled by a set of differential equations has an outcome value at any given moment in time. The type \texttt{CT} (stands for \textit{continuous machine}) in Figure \ref{fig:firstDynamics} is a first draft of representing the continuous physical dynamics~\cite{LeeModeling} --- the evolution of a system state in time: @@ -281,9 +281,9 @@ data Parameters = Parameters { interval :: Interval, } \end{code} \label{fig:dynamicsAux} \end{figure} -The above auxiliary types serve a common purpose: to provide at any given moment in time, all the information to execute a solver method until the end of the simulation. The type \texttt{Interval} determines when the simulation should start and when it should end. The \texttt{Method} sum type is used inside the \texttt{Solver} type to set solver sensible information, such as the size of the time step, which method will be used and in which stage the method is in at the current moment (more about the stage field on a later chapter). Finally, the \texttt{Parameters} type combines everything together, alongside with the current time value as well as its discrete counterpart, iteration. +The above auxiliary types serve a common purpose: to provide, at any given moment in time, all the information to execute a solver method until the end of the simulation. The type \texttt{Interval} determines when the simulation should start and when it should end.
The \texttt{Method} sum type is used inside the \texttt{Solver} type to set solver-specific information, such as the size of the time step, which method will be used and in which stage the method is in at the current moment (more about the stage field in a later Chapter). Finally, the \texttt{Parameters} type combines everything together, alongside the current time value as well as its discrete counterpart, iteration. -Further, the new \texttt{CT} type can also be parametrically polymorphic, removing the limitation of only using \texttt{Double} values as the outcome. Figure \ref{fig:dynamics} depicts the final type for the physical dynamics. The \texttt{IO} wrapper is needed to cope with memory management and side effects, all of which will be explained in the next chapter. Below, +Further, the new \texttt{CT} type can also be parametrically polymorphic, removing the limitation of only using \texttt{Double} values as the outcome. Figure \ref{fig:dynamics} depicts the final type for the physical dynamics. The \texttt{IO} wrapper is needed to cope with memory management and side effects, all of which will be explained in the next Chapter. Below, we have the definition for the \texttt{CT} type used in previous work~\cite{Lemos2022}: \begin{figure}[H]
+This summarizes the main pillars in the design: FF-GPAC, the mathematical definition of the problem and how we are modeling this domain in Haskell. The next Chapter, \textit{Effectful Integrals}, will start from this foundation, by adding typeclasses to the \texttt{CT} type, and will later describe the last core type before explaining the solver execution: the \texttt{Integrator} type. These improvements for the \texttt{CT} type and the new \texttt{Integrator} type will later be mapped to their FF-GPAC counterparts, explaining that they resemble the basic units mentioned in Section \ref{sec:gpac}. diff --git a/doc/MastersThesis/Lhs/Enlightenment.lhs index 60beea2..8892444 100644 --- a/doc/MastersThesis/Lhs/Enlightenment.lhs +++ b/doc/MastersThesis/Lhs/Enlightenment.lhs @@ -34,7 +34,7 @@ oldLorenzSystem = runCTFinal oldLorenzModel 100 lorenzSolver \end{code} } -Previously, we presented in detail the latter core type of the implementation, the \texttt{Integrator}, as well as why it can model an integral when used with the \texttt{CT} type. This chapter is a follow-up, and its objectives are threefold: describe how to map a set of differential equations to an executable model, reveal which functions execute a given example and present a guided-example as a proof-of-concept. +Previously, we presented in detail the latter core type of the implementation, the \texttt{Integrator}, as well as why it can model an integral when used with the \texttt{CT} type. This Chapter is a follow-up, and its objectives are threefold: describe how to map a set of differential equations to an executable model, reveal which functions execute a given example and present a guided example as a proof-of-concept.
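Before mapping equations into the DSL, it helps to see what an executable model ultimately computes. The chapter's running example $\dot{y} = y + t$, $y(0) = 1$ can be reproduced outside the DSL with a plain forward-Euler recursion; \textit{euler} and \textit{exact} below are illustrative names, not part of \texttt{FACT}:

```haskell
-- Plain-Haskell sketch (outside the DSL) of what the executable model for
-- ẏ = y + t, y(0) = 1 computes: a forward-Euler walk over the iteration
-- grid, assuming a fixed time step h.
euler :: Double -> (Double -> Double -> Double) -> Double -> Double -> Double
euler h f y0 tEnd = go 0 y0
  where
    go t y
      | t >= tEnd = y                        -- reached the end of the interval
      | otherwise = go (t + h) (y + h * f t y)

-- Analytic solution of ẏ = y + t with y(0) = 1, for comparison:
exact :: Double -> Double
exact t = 2 * exp t - t - 1
```

With a small step such as $h = 10^{-3}$, the Euler walk approaches the analytic value $y(1) = 2e - 2$, which is the same behavior the DSL's integrator reproduces through its solver machinery.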
\section{From Models to Models} @@ -73,7 +73,7 @@ $\dot{y} = y + t \quad \quad y(0) = 1$ \figuraBib{Rivika2GPAC}{The developed DSL translates a system described by differential equations to an executable model that resembles FF-GPAC's description}{}{fig:rivika2gpac}{width=.8\textwidth}% -In line 5, a record with type \texttt{Integrator} is created, with $1$ being the initial condition of the system. Line 6 creates a \textit{state variable}, a label that gives us access to the output of an integrator, \texttt{integ} in this case. Afterward, in line 7, the \textit{updateInteg} function connects the inputs to a given integrator by creating a combinational circuit, \texttt{(y + t)}. Polynomial circuits and integrators' outputs can be used as available inputs, as well as the \textit{time} of the simulation. Finally, line 8 returns the state variable as the output for the \textit{driver}, the main topic of the next section. +In line 5, a record with type \texttt{Integrator} is created, with $1$ being the initial condition of the system. Line 6 creates a \textit{state variable}, a label that gives us access to the output of an integrator, \texttt{integ} in this case. Afterward, in line 7, the \textit{updateInteg} function connects the inputs to a given integrator by creating a combinational circuit, \texttt{(y + t)}. Polynomial circuits and integrators' outputs can be used as available inputs, as well as the \textit{time} of the simulation. Finally, line 8 returns the state variable as the output for the \textit{driver}, the main topic of the next Section. There is, however, an useful improvement to be made into the definition of a model within the DSL. The presented example used only a single state variable, although it is common to have \textit{multiple} state variables, i.e., multiple integrators interacting with each other, modeling different aspects of a given scenario. 
Moreover, when dealing with multiple state variables, it is important to maintain \textit{synchronization} between them, i.e., the same \texttt{Parameters} is being applied to \textit{all} state variables at the same time. @@ -143,13 +143,13 @@ runCT m t sl = On line 3, we convert the final \textit{time value} for the simulation into an interval value for the simulation (\texttt{iv}) --- the simulation always starts at 0 and goes all the way up to the requested time. Next up, on line 4, we convert the interval to an \textit{iteration} interval in the format of a tuple, i.e., the continuous interval becomes the tuple $(0, \frac{stopTime - startTime}{timeStep})$, in which the second value of the tuple is \textit{rounded}. From line 5 to line 11, we are defining an auxiliary function \textit{parameterise}. This function picks a natural number, which represents the iteration index, and creates a new record with the type \texttt{Parameters}. Additionally, it uses the auxiliary function \textit{iterToTime} (line 7), which converts the iteration number from -the domain of discrete \textit{steps} to the domain of \textit{discrete time}, i.e., the time the solver methods can operate with (chapter 5 will explore more of this concept). This conversion is based on the time step being used, as well as which method and in which stage it is for that specific iteration. Finally, line 13 produces the outcome of the \textit{runCT} function. The final result is the output from a function called \textit{map} piped it as an argument for the \textit{sequence} function. +the domain of discrete \textit{steps} to the domain of \textit{discrete time}, i.e., the time the solver methods can operate with (Chapter 5 will explore more of this concept). This conversion is based on the time step being used, as well as which method and in which stage it is for that specific iteration. Finally, line 13 produces the outcome of the \textit{runCT} function. 
The final result is the output of the \textit{map} function, piped as an argument to the \textit{sequence} function. The \textit{map} operation is provided by the \texttt{Functor} of the list monad, and it applies an arbitrary function to the internal members of a list in a \textit{sequential} manner. In this case, the \textit{parameterise} function, composed with the continuous machine \texttt{m}, is the one being mapped. Thus, a custom value of the type \texttt{Parameters} is taking the place of each natural number in the list, and this is being applied to the received \texttt{CT} value. It produces a list of answers in order, each one wrapped in the \texttt{IO} monad. To abstract out the \texttt{IO}, thus getting \texttt{IO [a]} rather than \texttt{[IO a]}, the \textit{sequence} function finishes the implementation. Additionally, there is an analogous implementation of this function, so-called \textit{runCTFinal}, that returns only the final result of the simulation instead of the outputs at the time step samples. \section{An attractive example} -For the example walkthrough, the same example introduced in the chapter \textit{Introduction} will be used in this section. So, we will be solving a system, composed by a set of chaotic solutions, called \textit{the Lorenz Attractor}. In these types of systems, the ordinary differential equations are used to model chaotic systems, providing solutions based on parameter values and initial conditions. +For the example walkthrough, the same example introduced in the Chapter \textit{Introduction} will be used in this Section. So, we will be solving a system, composed of a set of chaotic solutions, called \textit{the Lorenz Attractor}. In these types of systems, ordinary differential equations are used to model chaotic behavior, providing solutions based on parameter values and initial conditions.
The original differential equations are presented below: $$ \sigma = 10.0 $$ $$ \rho = 28.0 $$ @@ -197,7 +197,7 @@ The first records, \texttt{Solver}, sets the environment (lines 1 to 4). It conf After this overview, let's follow the execution path used by the compiler. Haskell's compiler works in a lazy manner, meaning that it calls for execution only the necessary parts. So, the first step calling \textit{lorenzSystem} is to call the \textit{runCT} function with a model, final time for the simulation and solver configurations. Following its path of execution, the \textit{map} function (inside the driver) forces the application of a parametric record generated by the \textit{parameterise} function to the provided model, \textit{lorenzModel} in this case. Thus, it needs to be executed in order to return from the \textit{runCT} function.
The latter is a pointer or address that references a specific \texttt{CT Double} computation in memory: in the case of receiving a parametric record \texttt{ps}, it fixes potential problems with it via the \texttt{initialize} block, and it applies this fixed value in order to get \texttt{i}, i.e., the initial value $1$, the same being saved in the other field of the record, \texttt{initial}. +To understand the model, we need to follow the execution sequence of the output: \texttt{sequence [x, y, z]}, which requires executing all the lines before this line to obtain all the state variables. For the sake of simplicity, we will follow the execution of the operations related to the $x$ variable, given that the remaining variables have an analogous execution walkthrough. First and foremost, memory is allocated for the integrator to work with (line 12). Figure \ref{fig:allocateExample} depicts this idea, as well as being a reminder of what the \textit{createInteg} and \textit{initialize} functions do, described in the Chapter \textit{Effectful Integrals}. In this image, the integrator \texttt{integX} comprises two fields, \texttt{initial} and \texttt{computation}. The former is a simple value of the type \texttt{CT Double} that, regardless of the parameters record it receives, returns the initial condition of the system. The latter is a pointer or address that references a specific \texttt{CT Double} computation in memory: in the case of receiving a parametric record \texttt{ps}, it fixes potential problems with it via the \texttt{initialize} block, and applies this fixed value in order to get \texttt{i}, i.e., the initial value $1$, the same being saved in the other field of the record, \texttt{initial}. \figuraBib{ExampleAllocate}{After \textit{createInteg}, this record is the final image of the integrator.
The function \textit{initialize} gives us protection against wrong records of the type \texttt{Parameters}, ensuring it begins from the first iteration, i.e., $t_0$}{}{fig:allocateExample}{width=.90\textwidth}% @@ -211,7 +211,7 @@ The final step is to \textit{change} the computation \textit{inside} the memory \figuraBib{ExampleFinalModel}{After setting up the environment, this is the final depiction of an independent variable. The reader $x$ reads the values computed by the procedure stored in memory, a second-order Runge-Kutta method in this case}{}{fig:finalModelExample}{width=.90\textwidth}% -Figure \ref{fig:finalModelExample} shows the final image for state variable $x$ after until this point in the execution. Lastly, the state variable is wrapped inside a list and it is applied to the \textit{sequence} function, as explained in the previous section. This means that the list of variable(s) in the model, with the signature \texttt{[CT Double]}, is transformed into a value with the type \texttt{CT [Double]}. The transformation can be visually understood when looking at Figure \ref{fig:finalModelExample}. Instead of picking one \texttt{ps} of type \texttt{Parameters} and returning a value \textit{v}, the same parametric record returns a \textit{list} of values, with the \textit{same} parametric dependency being applied to all state variables inside $[x, y, z]$. +Figure \ref{fig:finalModelExample} shows the final image for state variable $x$ up to this point in the execution. Lastly, the state variable is wrapped inside a list and applied to the \textit{sequence} function, as explained in the previous Section. This means that the list of variable(s) in the model, with the signature \texttt{[CT Double]}, is transformed into a value with the type \texttt{CT [Double]}. The transformation can be visually understood when looking at Figure \ref{fig:finalModelExample}.
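The two-field integrator record described above (a constant initial machine plus a mutable \texttt{computation} cell) can be sketched with \texttt{IORef}, collapsing \texttt{CT} to a plain function over time for brevity. All names below are simplified stand-ins for illustration, not FACT's actual definitions.

```haskell
import Data.IORef

-- Stand-in CT: the Parameters record is reduced to a single time value.
type CT a = Double -> IO a

data Integrator = Integrator
  { initial     :: CT Double          -- always yields the initial condition
  , computation :: IORef (CT Double)  -- pointer to the current computation
  }

-- Sketch of what 'createInteg' does: allocate the cell so that, before any
-- update, the stored computation simply returns the initial condition.
createInteg :: Double -> IO Integrator
createInteg i0 = do
  comp <- newIORef (\_ -> pure i0)
  pure (Integrator (\_ -> pure i0) comp)

main :: IO ()
main = do
  integ <- createInteg 1.0
  f <- readIORef (computation integ)
  f 42.0 >>= print
```

Reading the cell and applying the stored machine to any time value yields the initial condition, matching the freshly allocated integrator pictured in Figure \ref{fig:allocateExample}.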
Instead of picking one \texttt{ps} of type \texttt{Parameters} and returning a value \textit{v}, the same parametric record returns a \textit{list} of values, with the \textit{same} parametric dependency being applied to all state variables inside $[x, y, z]$. However, this only addresses \textit{how} the driver triggers the entire execution, but does \textit{not} explain how the differential equations are actually being calculated with the \texttt{RK2} numerical method. This is done by the solver functions (\textit{integEuler}, \textit{integRK2} and \textit{integRK4}) and those are all based on equation \ref{eq:solverEquation} regardless of the chosen method. The equation is the following: @@ -229,7 +229,7 @@ It is worth mentioning that the dependency \texttt{c} is a call of a \textit{sol \section{Lorenz's Butterfly} -After all the explained theory behind the project, it remains to be seen if this can be converted into practical results. +After all the theory explained for this project, it remains to be seen whether it can be converted into practical results.
With certain constant values, the generated graph of the Lorenz's Attractor example used in the last Chapter is known for oscillating and taking the shape of two fixed-point attractors, meaning that the system evolves to an oscillating state even if slightly disturbed. As shown in Figure \ref{fig:lorenzPlots}, the obtained graph from the Lorenz's Attractor model matches what was expected for a Lorenz system. It is worth noting that changing the values of $\sigma$, $\rho$ and $\beta$ can produce completely different answers, destroying the characteristic "butterfly" shape of the graph. Although correct, the presented solution has a few drawbacks. The next three chapters will explain and address the identified problems with the current implementation. \figuraBib{LorenzPlot1}{The Lorenz's Attractor example has a very famous butterfly shape from certain angles and constant values in the graph generated by the solution of the differential equations.}{}{fig:lorenzPlots}{width=.90\textwidth}% diff --git a/doc/MastersThesis/Lhs/Fixing.lhs b/doc/MastersThesis/Lhs/Fixing.lhs index 01fba27..9307b76 100644 --- a/doc/MastersThesis/Lhs/Fixing.lhs +++ b/doc/MastersThesis/Lhs/Fixing.lhs @@ -18,18 +18,14 @@ The last improvement for FACT is in terms of \textit{familiarity}. When someone the main goal should be to have the least amount of friction when using the simulation software. Hence, the requirement of knowing implementation details or programming language details is something we would like to avoid, given that it leaks noise into the designer's mind. The designer's concern should be to pay attention to the system's description and FACT having an extra -step of translation or noisy setups just adds an extra burden with no real gains on the engineering of simulating continuous time.
This Chapter will present \textit{FFACT}, an evolution of FACT which aims to reduce the noise even further. -It is worth noting that the term \textit{fixed-point} has different meanings in the domains of engineering and mathematics. When referecing the -fractional representations within a computer, one may use the \textit{fixed-point method}. Thus, to avoid confusion, section~\ref{subsec:fix} starts -by defining the term as a mathematical combinator that can be used to implement recursion. - \section{Integrator's Noise} Chapter 4, \textit{Execution Walkthrough}, described the semantics and usability of an example of a system in mathematical specification and its mapping to a simulation-ready description provided by FACT. -Below we have this example modeled using FACT (same code as provided in section~\ref{sec:intro}): +Below we have this example modeled using FACT (same code as provided in Section~\ref{sec:intro}): % \vspace{0.1cm} \begin{spec} @@ -55,7 +51,7 @@ lorenzModel = It is noticeable, however, that FACT imposes a significant amount of overhead from the user's perspective due to the \textbf{explicit use of integrators} for most memory-required simulations. When creating stateful circuits, a user of FACT is obligated to use the integrator's API, i.e., use the functions \texttt{createInteg} (lines 6 to 8), \texttt{readInteg} (lines 9 to 11), and \texttt{updateInteg} (lines 12 to 14).
Although these functions remove the -management of the aforementioned implicit mutual recursion mentioned in chapter 3, \textit{Effectful Integrals}, from the user, it is still required to follow +management of the implicit mutual recursion mentioned in Chapter 3, \textit{Effectful Integrals}, from the user, it is still required to follow a specific sequence of steps to complete a model for any simulation: % \begin{enumerate} @@ -64,7 +60,7 @@ a specific sequence of steps to complete a model for any simulation: \item Update integrators with the actual ODEs of interest (via the use of \textit{updateInteg}). \end{enumerate} -Visually, this step-by-step list for FACT's models follow the pattern detailed in Figure~\ref{fig:modelPipe} in chapter 4, \textit{Execution Walkthrough}. +Visually, this step-by-step list for FACT's models follows the pattern detailed in Figure~\ref{fig:modelPipe} in Chapter 4, \textit{Execution Walkthrough}. More importantly, \emph{all} those steps are visible and transparent from a usability point of view. Hence, a system's designer \emph{must} be aware of this \emph{entire} sequence of mandatory steps, even if their interest probably only relates to lines 12 to 14. Although one's goal is being able to specify a system and start a simulation, there is no escape -- one has to bear the noise created due to @@ -79,6 +75,10 @@ required piece to get rid of the \texttt{Integrator} type, thus also removing it \section{The Fixed-Point Combinator} \label{subsec:fix} +It is worth noting that the term \textit{fixed-point} has different meanings in the domains of engineering and mathematics. When referencing the +fractional representations within a computer, one may use the \textit{fixed-point method}. Thus, to avoid confusion, the following is the definition +of this concept in this dissertation, alongside a set of examples of its use as a mathematical combinator that can be used to implement recursion.
+ On the surface, the fixed-point combinator is a simple mapping that fulfills the following property: a point \emph{p} is a fixed-point of a function \emph{f} if \emph{f(p)} lies on the identity function, i.e., \emph{f(p) = p}. Not all functions have fixed-points, and some functions may have more than one~\cite{tennent1991}. @@ -189,7 +189,7 @@ By allowing this behavior, mutually recursive bindings are made possible and thu Haskell's vanilla \texttt{let} already acts like a \texttt{letrec}, and it would be useful to replicate this property for monadic bindings as well. In the case of the \texttt{counter} example, the execution of a side-effect is mandatory to evaluate the values of the bindings, such as \texttt{next}, \texttt{inc}, \texttt{out}, and \texttt{zero} (lines 2 to 5). -In contrast, the example \texttt{countDown} in section~\ref{subsec:fix} has none of its bindings locked by side-effects, e.g, the bindings \texttt{f} and \texttt{n} have nothing to do with the effect of printing a message on \texttt{stdout}. +In contrast, the example \texttt{countDown} in Section~\ref{subsec:fix} has none of its bindings locked by side-effects, e.g., the bindings \texttt{f} and \texttt{n} have nothing to do with the effect of printing a message on \texttt{stdout}.
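As a concrete illustration of this property (a generic example, not code from the dissertation), Haskell's \textit{fix} finds such a point lazily and can express recursion without explicit self-reference:

```haskell
import Data.Function (fix)  -- fix f = let x = f x in x

-- The factorial function is a fixed point of the step function below:
-- applying the step to 'factorial' gives back 'factorial', i.e. f(p) = p.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n <= 0 then 1 else n * rec (n - 1))

main :: IO ()
main = print (factorial 5)  -- prints 120
```

The lambda never names itself; \textit{fix} supplies the recursive occurrence \texttt{rec}, which is exactly the mechanism the following sections lift into monadic bindings.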
@@ -328,11 +328,11 @@ lorenzSystem = runCT lorenzModel 100 lorenzSolver \end{code} Not surprisingly, the results of this new approach using the monadic fixed-point combinator are very similar to the -performance metrics depicted in chapter 6, \textit{Caching the Speed Pill} --- indicating that we are \textit{not} trading performance +performance metrics depicted in Chapter 6, \textit{Caching the Speed Pill} --- indicating that we are \textit{not} trading performance for a gain in conciseness. Figure~\ref{fig:fixed-graph} shows the new results: \figuraBib{Graph3}{Results of FFACT are similar to the final version of FACT.}{}{fig:fixed-graph}{width=.97\textwidth}% The function \texttt{integ} alone in FFACT ties the recursion knot previously done via the \texttt{computation} and \texttt{cache} fields from the original integrator data type in FACT. -Hence, a lot of implementation noise of the DSL is kept away from the user --- the designer of the system --- when using FFACT. With this chapter, we addressed -the third and final concerned explained in chapter 1, \textit{Introduction}. The final chapter, \textit{Conclusion}, will conclude this work, pointing out limitations of the project, as well as future improvements and final thoughts about the project. +Hence, a lot of implementation noise of the DSL is kept away from the user --- the designer of the system --- when using FFACT. With this Chapter, we addressed +the third and final concerned explained in Chapter 1, \textit{Introduction}. The final Chapter, \textit{Conclusion}, will conclude this work, pointing out limitations of the project, as well as future improvements and final thoughts about the project. 
diff --git a/doc/MastersThesis/Lhs/Implementation.lhs b/doc/MastersThesis/Lhs/Implementation.lhs index 9c98ab4..6a6eb6c 100644 --- a/doc/MastersThesis/Lhs/Implementation.lhs +++ b/doc/MastersThesis/Lhs/Implementation.lhs @@ -9,7 +9,7 @@ import Control.Monad.Trans.Reader \end{code} } -This chapter details the next steps to simulate continuous-time behaviours. It starts by enhancing the previously defined \texttt{CT} type by implementing some specific typeclasses. Next, the second core type of the simulation, the \texttt{Integrator} type, will be introduced alongside its functions. These improvements will then be compared to FF-GPAC's basic units, our source of formalism within the project. At the end of the chapter, an implicit recursion will be blended with a lot of effectful operations, making the \texttt{Integrator} type hard to digest. This will be addressed by a guided Lorenz Attractor example in the next chapter, \textit{Execution Walkthrough}. +This Chapter details the next steps to simulate continuous-time behaviours. It starts by enhancing the previously defined \texttt{CT} type by implementing some specific typeclasses. Next, the second core type of the simulation, the \texttt{Integrator} type, will be introduced alongside its functions. These improvements will then be compared to FF-GPAC's basic units, our source of formalism within the project. At the end of the Chapter, an implicit recursion will be blended with a lot of effectful operations, making the \texttt{Integrator} type hard to digest. This will be addressed by a guided Lorenz Attractor example in the next Chapter, \textit{Execution Walkthrough}. \section{Uplifting the CT Type} \label{sec:typeclasses} @@ -98,7 +98,7 @@ bind k (CT m) \label{fig:monad} \end{figure} -Aside from lifting operations, the final typeclass related to data manipulation is the \texttt{MonadIO} typeclass. 
It comprises only one function, \textit{liftIO}, and its purpose is to change the structure that is wrapping the value, going from an \texttt{IO} outer shell to the monad of interest, \texttt{CT} in this case. The usefulness of this typeclass will be more clear in the next topic, section \ref{sec:integrator}. The implementation is bellow, alongside its visual representation in Figure \ref{fig:monadIO}. Once again, consider the explicit +Aside from lifting operations, the final typeclass related to data manipulation is the \texttt{MonadIO} typeclass. It comprises only one function, \textit{liftIO}, and its purpose is to change the structure that is wrapping the value, going from an \texttt{IO} outer shell to the monad of interest, \texttt{CT} in this case. The usefulness of this typeclass will become clearer in the next topic, Section \ref{sec:integrator}. The implementation is below, alongside its visual representation in Figure \ref{fig:monadIO}. Once again, consider the explicit definition for the \texttt{CT} type instead of the type alias. \begin{figure}[ht!] @@ -130,13 +130,13 @@ binaryOP func da db = (fmap func da) <*> db \section{GPAC Bind I: CT} -After these improvements in the \texttt{CT} type, it is possible to map some of them to FF-GPAC's concepts. As we will see shortly, the implemented numerical typeclasses, when combined with the lifting typeclasses (\texttt{Functor}, \texttt{Applicative}, \texttt{Monad}), express 3 out of 4 FF-GPAC's basic circuits presented in Figure \ref{fig:gpacBasic} in the previous chapter. +After these improvements in the \texttt{CT} type, it is possible to map some of them to FF-GPAC's concepts. As we will see shortly, the implemented numerical typeclasses, when combined with the lifting typeclasses (\texttt{Functor}, \texttt{Applicative}, \texttt{Monad}), express 3 of FF-GPAC's 4 basic circuits presented in Figure \ref{fig:gpacBasic} in the previous Chapter.
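A reduced sketch of this lifting machinery shows how \textit{binaryOP}-style combination works. Here \texttt{CT} is collapsed to a pure reader over time, ignoring \texttt{IO} and the full \texttt{Parameters} record, so these are simplified stand-ins rather than FACT's actual instances:

```haskell
-- Stand-in CT: a pure function from time, enough to show the lifting pattern.
newtype CT a = CT { apply :: Double -> a }

instance Functor CT where
  fmap f (CT g) = CT (f . g)

instance Applicative CT where
  pure x = CT (const x)                  -- constant unit: same value at any t
  CT f <*> CT g = CT (\t -> f t (g t))  -- feed the same time to both sides

-- Lifted binary operation, mirroring the binaryOP shown in the hunk above.
binaryOP :: (a -> b -> c) -> CT a -> CT b -> CT c
binaryOP func da db = fmap func da <*> db

main :: IO ()
main = print (apply (binaryOP (+) (CT id) (pure 1)) 2)
```

An adder is then \texttt{binaryOP (+)} and a multiplier \texttt{binaryOP (*)} applied to two signals, with \texttt{pure} providing the constant unit.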
First and foremost, all FF-GPAC units receive \textit{time} as an available input to compute. The \texttt{CT} type represents continuous physical dynamics~\cite{LeeModeling}, which means that it portrays a function from time to physical output. Hence, it already has time embedded into its definition; a record with type \texttt{Parameters} is received as a dependency to obtain the final result at that moment. Furthermore, it remains to model the FF-GPAC's black boxes and the composition rules that bind them together. The simplest unit of all, \texttt{Constant Unit}, can be achieved via the implementation of the \texttt{Applicative} and \texttt{Num} typeclasses. First, this unit needs to receive the time of simulation at that point, which is granted by the \texttt{CT} type. Next, it needs to return a constant value $k$ for all moments in time. The \texttt{Num} typeclass gives the \texttt{CT} type the option of using number representations, such as the types \texttt{Int}, \texttt{Integer}, \texttt{Float} and \texttt{Double}. Further, the \texttt{Applicative} typeclass can lift those number-related functions to the desired type by using the \textit{pure} function. -Arithmetic basic units, such as the \texttt{Adder Unit} and the \texttt{Multiplier Unit}, are being modeled by the \texttt{Functor}, \texttt{Applicative} and \texttt{Num} typeclasses. Those two units use binary operations with physical signals. As demonstrated in the previous section, the combination of numerical and lifting typeclasses let us to model such operations. Figure \ref{fig:gpacBind1} shows FF-GPAC's analog circuits alongside their \texttt{FACT} counterparts. The forth unit and the composition rules will be mapped after describing the second main type of \texttt{FACT}: the \texttt{Integrator} type. +Arithmetic basic units, such as the \texttt{Adder Unit} and the \texttt{Multiplier Unit}, are being modeled by the \texttt{Functor}, \texttt{Applicative} and \texttt{Num} typeclasses.
Those two units use binary operations with physical signals. As demonstrated in the previous Section, the combination of numerical and lifting typeclasses lets us model such operations. Figure \ref{fig:gpacBind1} shows FF-GPAC's analog circuits alongside their \texttt{FACT} counterparts. The fourth unit and the composition rules will be mapped after describing the second main type of \texttt{FACT}: the \texttt{Integrator} type. \figuraBib{GPACBind1}{The ability of lifting numerical values to the \texttt{CT} type resembles three FF-GPAC analog circuits: \texttt{Constant}, \texttt{Adder} and \texttt{Multiplier}}{}{fig:gpacBind1}{width=.9\textwidth}% @@ -157,9 +157,9 @@ The \texttt{CT} type directly interacts with a second type that intensively expl In low-level and imperative languages, such as C, Fortran, Zig, Rust, impurity is present across the program and can be easily and naturally added via \textit{pointers} --- addresses to memory regions where values, or even other pointers, can be stored. In contrast, functional programming languages advocate a more explicit use of such aspects, given that they prioritize pure and mathematical functions instead of allowing the developer to mix these two facets. So, the feature is still available but the developer has to take extra effort to add an effectful function into the program, clearly separating these two different styles of programming. -The second core type of the present work, the \texttt{Integrator}, is based on this idea of side effect operations, manipulating data directly in memory, always consulting and modifying data in the impure world. Foremost, it represents a differential equation, as explained in chapter 2, \textit{Design Philosophy} section \ref{sec:diff}, meaning that the \texttt{Integrator} type models the calculation of an \textit{integral}.
It accomplishes this task by driving the numerical algorithms of a given solver method, implying that this is where the \textit{operational} semantics of our DSL reside. +The second core type of the present work, the \texttt{Integrator}, is based on this idea of side effect operations, manipulating data directly in memory, always consulting and modifying data in the impure world. Foremost, it represents a differential equation, as explained in Chapter 2, \textit{Design Philosophy} Section \ref{sec:diff}, meaning that the \texttt{Integrator} type models the calculation of an \textit{integral}. It accomplishes this task by driving the numerical algorithms of a given solver method, implying that this is where the \textit{operational} semantics of our DSL reside. -With this in mind, the \texttt{Integrator} type is responsible for executing a given solver method to calculate a given integral. This type comprises the initial value of the system, i.e., the value of a given function at time $t_0$, and a pointer to a memory region for future use, called \texttt{computation}. In Haskell, something similar to a pointer and memory allocation can be made by using the \texttt{IORef} type. This memory region is being allocated to be used with the type \texttt{CT Double}. Also, the initial value is also represented by \texttt{CT Double}, and the initial condition can be lifted to this type because the typeclass \texttt{Num} is implemented (section \ref{sec:typeclasses}). It is worth noticing that these pointers are pointing to functions or \textit{computations} and not to double precision values. +With this in mind, the \texttt{Integrator} type is responsible for executing a given solver method to calculate a given integral. This type comprises the initial value of the system, i.e., the value of a given function at time $t_0$, and a pointer to a memory region for future use, called \texttt{computation}. 
In Haskell, something similar to a pointer and memory allocation can be made by using the \texttt{IORef} type. This memory region is being allocated to be used with the type \texttt{CT Double}. The initial value is also represented by \texttt{CT Double}, and the initial condition can be lifted to this type because the typeclass \texttt{Num} is implemented (Section \ref{sec:typeclasses}). It is worth noticing that these pointers point to functions or \textit{computations} and not to double precision values. \begin{purespec} data Integrator = Integrator { initial :: CT Double, @@ -209,15 +209,15 @@ In the beginning of the function (line 3), we extract the initial value from the create a new computation, so-called \texttt{z} --- a function wrapped in the \texttt{CT} type that receives a \texttt{Parameters} record and computes the result based on the solving method. Because this computation needs to do lookups on some configuration values, we use the function \texttt{ask} (line 5) from \texttt{ReaderT} to get our environment values; in this case a value of type \texttt{Parameters}. Later on, the follow-up step is to build a copy of the \textit{same process} being pointed to by the \texttt{computation} pointer (line 6). -Finally, after checking the chosen solver (line 7), it is executed one iteration of the process by calling \textit{integEuler}, or \textit{integRK2} or \textit{integRK4}. After line 10, this entire process \texttt{z} is being pointed by the \texttt{computation} pointer, being done by the $writeIORef$ function~\footref{foot:IORef}. It may seem confusing that inside \texttt{z} we are \textit{reading} what is being pointed and later, on the last line of \textit{updateInteg}, this is being used on the final line to update that same pointer. This is necessary, as it will be explained in the next chapter \textit{Execution Walkthrough}, to allow the use of an \textit{implicit recursion} to assure the sequential aspect needed by the solvers.
For now, the core idea is this: the \textit{updateInteg} function alters the \textit{future} computations; it rewrites which procedure will be pointed by the \texttt{computation} pointer. This new procedure, which we called \texttt{z}, creates an intermediate computation, \texttt{whatToDo} (line 6), that \textit{reads} what this pointer is addressing, which is \texttt{z} itself. +Finally, after checking the chosen solver (line 7), one iteration of the process is executed by calling \textit{integEuler}, \textit{integRK2} or \textit{integRK4}. After line 10, this entire process \texttt{z} is pointed to by the \texttt{computation} pointer, which is done by the \textit{writeIORef} function~\footref{foot:IORef}. It may seem confusing that inside \texttt{z} we are \textit{reading} what is being pointed to and later, on the last line of \textit{updateInteg}, this is used to update that same pointer. This is necessary, as will be explained in the next Chapter, \textit{Execution Walkthrough}, to allow the use of an \textit{implicit recursion} to assure the sequential aspect needed by the solvers. For now, the core idea is this: the \textit{updateInteg} function alters the \textit{future} computations; it rewrites which procedure will be pointed to by the \texttt{computation} pointer. This new procedure, which we called \texttt{z}, creates an intermediate computation, \texttt{whatToDo} (line 6), that \textit{reads} what this pointer is addressing, which is \texttt{z} itself. Initially, this strange behaviour may give the impression that this computation will never halt. However, Haskell's \textit{laziness} assures that a given computation will not be computed unless it is necessary to continue execution and this is \textit{not} the case in the current stage, given that we are just setting the environment in the memory to further calculate the solution of the system.
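The pointer-rewriting knot described above can be sketched in isolation. This is a simplified, hypothetical model using only \texttt{IORef}, with none of the \texttt{CT}/\texttt{Parameters} machinery: the cell is updated to hold a computation that reads the cell itself, and laziness keeps this from looping, because the read only happens when a step actually needs it.

```haskell
import Data.IORef

-- The stored computation 'z' reads the cell it lives in to recurse on the
-- previous iteration, mirroring updateInteg's self-referential write.
knotDemo :: Int -> IO Int
knotDemo n0 = do
  cell <- newIORef (\_ -> pure 0)
  let z n
        | n <= 0    = pure 1               -- base case: the initial value
        | otherwise = do
            whatToDo <- readIORef cell     -- reads back 'z' itself
            prev <- whatToDo (n - 1)       -- implicit recursion via pointer
            pure (prev + 1)
  writeIORef cell z                        -- rewrite what the pointer holds
  f <- readIORef cell
  f n0

main :: IO ()
main = knotDemo 3 >>= print
```

Nothing diverges at \texttt{writeIORef} time; the self-reference is only followed later, one step at a time, exactly as the text describes for \texttt{z} and \texttt{whatToDo}.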
\section{GPAC Bind II: Integrator} -The \texttt{Integrator} type introduced in the previous section corresponds to FF-GPAC's forth and final basic unit, the integrator. The analog version of the integrator used in FF-GPAC had the goal of using physical systems (shafts and gears) that obeys the same mathematical relations that control other physical or technical phenomenon under investigation~\cite{Graca2004}. In contrast, the integrator modeled in {FACT} uses pointers in a digital computer that point to iteration-based algorithms that can approximate the solution of the problem at a requested moment $t$ in time. +The \texttt{Integrator} type introduced in the previous Section corresponds to FF-GPAC's fourth and final basic unit, the integrator. The analog version of the integrator used in FF-GPAC had the goal of using physical systems (shafts and gears) that obey the same mathematical relations that control other physical or technical phenomena under investigation~\cite{Graca2004}. In contrast, the integrator modeled in {FACT} uses pointers in a digital computer that point to iteration-based algorithms that can approximate the solution of the problem at a requested moment $t$ in time. -Lastly, there are the composition rules in FF-GPAC --- constraints that describe how the units can be interconnected. The following are the same composition rules presented in chapter 2, \textit{Design Philosophy}, section \ref{sec:gpac}: +Lastly, there are the composition rules in FF-GPAC --- constraints that describe how the units can be interconnected. The following are the same composition rules presented in Chapter 2, \textit{Design Philosophy}, Section \ref{sec:gpac}: \begin{enumerate} \item An input of a polynomial circuit should be the input $t$ or the output of an integrator. Feedback can only be done from the output of integrators to inputs of polynomial circuits.
@@ -226,7 +226,7 @@ Lastly, there are the composition rules in FF-GPAC --- constraints that describe \item Each variable of integration of an integrator is the input \textit{t}. \end{enumerate} -The preceding rules include defining connections with polynomial circuits --- an acyclic circuit composed only by constant functions, adders and multipliers. These special circuits are already being modeled in \texttt{FACT} by the \texttt{CT} type with a set of typeclasses, as explained in the previous section about GPAC. The \textit{integrator functions}, e.g., \textit{readInteg} and \textit{updateInteg}, represent the composition rules. +The preceding rules include defining connections with polynomial circuits --- an acyclic circuit composed only of constant functions, adders and multipliers. These special circuits are already being modeled in \texttt{FACT} by the \texttt{CT} type with a set of typeclasses, as explained in the previous Section about GPAC. The \textit{integrator functions}, e.g., \textit{readInteg} and \textit{updateInteg}, represent the composition rules. Going back to the type signature of \textit{updateInteg}, \texttt{Integrator -> CT Double -> CT ()}, we can interpret this function as a \textit{wiring} operation. This function connects the output of a polynomial circuit, represented by the value with the \texttt{CT Double} type, as an input of the integrator, represented by the \textit{Integrator} type. Because the operation is just setting up the connections between the two, the function ends with the type \texttt{CT ()}. @@ -239,7 +239,7 @@ A polynomial circuit can have the time $t$ or an output of another integrator as \section{Using Recursion to solve Math} -The remaining topic of this chapter is to describe in detail how the solver methods are being implemented. There are three solvers currently implemented: +The remaining topic of this Chapter is to describe in detail how the solver methods are being implemented.
There are three solvers currently implemented: \begin{itemize} \item Euler Method or First-order Runge-Kutta Method @@ -247,7 +247,7 @@ The remaining topic of this chapter is to describe in detail how the solver meth \item Fourth-order Runge-Kutta Method \end{itemize} -To explain how the solvers work and their nuances, it is useful to go into the implementation of the simplest one --- the Euler method. However, the implementation of the solvers use a slightly different function for the next step or iteration in comparison to the one explained in chapter 2. Hence, it is worthwhile to remember how this method originally iterates in terms of its mathematical description and compare it to the new function. From equation \ref{eq:nextStep}, we can obtain a different function to next step, by subtracting the index from both sides of the equation: +To explain how the solvers work and their nuances, it is useful to go into the implementation of the simplest one --- the Euler method. However, the implementation of the solvers uses a slightly different function for the next step or iteration in comparison to the one explained in Chapter 2. Hence, it is worthwhile to remember how this method originally iterates in terms of its mathematical description and compare it to the new function. From equation \ref{eq:nextStep}, we can obtain a different function for the next step by subtracting one from the indices on both sides of the equation: \begin{equation} y_{n+1} = y_n + hf(t_n,y_n) \rightarrow y_n = y_{n-1} + hf(t_{n-1}, y_{n-1}) \end{equation} @@ -300,7 +300,7 @@ integEuler diff init compute = do \end{code} } -On line 5, it is possible to see which functions are available in order to execute a step in the solver. The dependency \texttt{diff} is the representation of the differential equation itself. The initial value, $y(t_0)$, can be obtained by applying any \texttt{Parameters} record to the \texttt{init} dependency function.
The next dependency, \texttt{compute}, execute everything previously defined in \textit{updateInteg}; thus effectively executing a new step using the \textit{same} solver. The result of \texttt{compute} depends on which parametric record will be applied, meaning that we call a new and different solver step in the current one, potentially building a chain of solver step calls. This mechanism --- of executing again a solver step, inside the solver itself --- is the aforementioned implicit recursion, described in the earlier section. By changing the \texttt{ps} record, originally obtained via the \texttt{ReaderT} with the \texttt{ask} function, to the \textit{previous} moment and iteration with the solver starting from initial stage, it is guaranteed that for any step the previous one can be computed, a requirement when using numerical methods. +On line 5, it is possible to see which functions are available in order to execute a step in the solver. The dependency \texttt{diff} is the representation of the differential equation itself. The initial value, $y(t_0)$, can be obtained by applying any \texttt{Parameters} record to the \texttt{init} dependency function. The next dependency, \texttt{compute}, executes everything previously defined in \textit{updateInteg}, thus effectively executing a new step using the \textit{same} solver. The result of \texttt{compute} depends on which parametric record will be applied, meaning that we call a new and different solver step in the current one, potentially building a chain of solver step calls. This mechanism --- of executing a solver step again, inside the solver itself --- is the aforementioned implicit recursion, described in the earlier Section.
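The rewinding of \texttt{ps} described here can be illustrated with a minimal function-based reader. This is a deliberate simplification: \texttt{FACT} uses \texttt{ReaderT}, and its \texttt{Parameters} record carries more than an iteration index.

```haskell
-- Minimal function-based reader, just enough to mirror ask/local.
type Reader r a = r -> a

ask :: Reader r r
ask = id

local :: (r -> r) -> Reader r a -> Reader r a
local f m = m . f

-- Hypothetical stand-in for FACT's Parameters: only the iteration index.
newtype Params = Params { iteration :: Int }

-- A solver step: the base case returns the initial value; otherwise the
-- environment is rewound with `local` so the previous step runs first.
-- The update is the Euler step for y' = y, purely for illustration.
step :: Double -> Double -> Reader Params Double
step h y0 ps
  | iteration ps == 0 = y0
  | otherwise =
      let yPrev = local (\p -> Params (iteration p - 1)) (step h y0) ps
      in yPrev + h * yPrev
```

Running \texttt{step} at iteration $n$ therefore unwinds all the way down to the base case before accumulating the updates back up, mirroring the implicit recursion of \texttt{integEuler}.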
By changing the \texttt{ps} record, originally obtained via the \texttt{ReaderT} with the \texttt{ask} function, to the \textit{previous} moment and iteration with the solver starting from the initial stage, it is guaranteed that for any step the previous one can be computed, a requirement when using numerical methods. With this in mind, the solver function treats the initial value case as the base case of the recursion, whilst it treats the remaining ones normally (line 9). In the base case (lines 7 and 8), the outcome is obtained by just returning the continuous machine with the initial value. Otherwise, it is necessary to know the result from the previous iteration in order to generate the current one. To address this requirement, the solver builds another parametric record (lines 10 to 13) and calls another solver step (line 14). Also, it calculates the value from applying this record to \texttt{diff} (line 15), the differential equation. These machines, based on \texttt{compute} and \texttt{diff}, need to be modified with a value of type \texttt{Parameters} containing the previous iteration (so-called \texttt{psy} in the code). Hence, the function \texttt{local} is used to alter the existing parameters value in those readers. @@ -416,4 +416,4 @@ integRK4 f i y = do \end{code} } -This finishes this chapter, where we incremented the capabilities of the \texttt{CT} type and used it in combination with a brand-new type, the \texttt{Integrator}. Together these types represent the mathematical integral operation. The solver methods are involved within this implementation, and they use an implicit recursion to maintain their sequential behaviour. Also, those abstractions were mapped to FF-GPAC's ideas in order to bring some formalism to the project. However, the used mechanisms, such as implicit recursion and memory manipulation, make it hard to visualize how to execute the project given a description of a physical system.
The next chapter, \textit{Execution Walkthrough}, will introduce the \textit{driver} of the simulation and present a step-by-step concrete example. Later on, we will improve the DSL to completely remove all the noise introduced in its use because of such implicit recursion. +This finishes this Chapter, where we extended the capabilities of the \texttt{CT} type and used it in combination with a brand-new type, the \texttt{Integrator}. Together these types represent the mathematical integral operation. The solver methods are involved within this implementation, and they use an implicit recursion to maintain their sequential behaviour. Also, those abstractions were mapped to FF-GPAC's ideas in order to bring some formalism to the project. However, the used mechanisms, such as implicit recursion and memory manipulation, make it hard to visualize how to execute the project given a description of a physical system. The next Chapter, \textit{Execution Walkthrough}, will introduce the \textit{driver} of the simulation and present a step-by-step concrete example. Later on, we will improve the DSL to completely remove all the noise introduced in its use because of such implicit recursion. diff --git a/doc/MastersThesis/Lhs/Interpolation.lhs b/doc/MastersThesis/Lhs/Interpolation.lhs index b38d73b..31afb32 100644 --- a/doc/MastersThesis/Lhs/Interpolation.lhs +++ b/doc/MastersThesis/Lhs/Interpolation.lhs @@ -25,7 +25,7 @@ iterToTime interv solver n (SolverStage st) = \end{code} } -The previous chapter ended anouncing that drawbacks are present in the current implementation. This chapter will introduce the first concern: numerical methods do not reside in the continuous domain, the one we are actually interested in. After this chapter, this domain issue will be addressed via \textit{interpolation}, with a few tweaks in the integrator and driver. +The previous Chapter ended announcing that drawbacks are present in the current implementation.
This Chapter will introduce the first concern: numerical methods do not reside in the continuous domain, the one we are actually interested in. After this Chapter, this domain issue will be addressed via \textit{interpolation}, with a few tweaks in the integrator and driver. \section{Time Domains} @@ -93,7 +93,7 @@ data Stage = SolverStage Int The type \texttt{Stage} allows values to be either the normal flow of execution, marked by the use of \texttt{SolverStage}, or the indication that an extra step for interpolation needs to be done, marked by the \texttt{Interpolate} tag. Moreover, types and functions described in earlier chapters, such as \textit{Design Philosophy}, and \textit{Effectful Integrals} need to be adapted to use this new -type instead of the original \texttt{Int} previously proposed (in chapter 2, \textit{Design Philosophy}). Types like \texttt{Parameters} and +type instead of the original \texttt{Int} previously proposed (in Chapter 2, \textit{Design Philosophy}). Types like \texttt{Parameters} and functions like \textit{integEuler}, \textit{iterToTime}, and \textit{runCT} need to be updated accordingly. In all of those instances, processing will just continue normally; \texttt{SolverStage} will be used. @@ -203,8 +203,8 @@ updateInteg integ diff = do \figuraBib{Interpolate}{Linear interpolation is being used to transition us back to the continuous domain.}{}{fig:interpolate}{width=.7\textwidth}% -The last step in this tweak is to add this function into the integrator function \textit{updateInteg}. The code is almost identical to the one presented in chapter 3, \textit{Effectful Integrals}. The main difference is in line 11, where the interpolation function is being applied to \texttt{z}. Figure \ref{fig:diffInterpolate} shows the same visual representation for the \textit{updateInteg} function used in chapter 4, but with the aforementioned modifications.
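The interpolation tweak itself is small; a self-contained sketch of the idea (hypothetical names, not the exact \texttt{FACT} code) is:

```haskell
-- Hypothetical sketch of bridging the discrete solver grid back to
-- continuous time via linear interpolation; not FACT's exact code.
interpolate :: Double          -- t, the requested (continuous) time
            -> Double          -- t0, the simulation start time
            -> Double          -- dt, the solver time step
            -> (Int -> Double) -- solution sampled at iteration indices
            -> Double
interpolate t t0 dt solAt =
  let n1 = floor ((t - t0) / dt) :: Int  -- grid point at or below t
      n2 = n1 + 1                        -- grid point above t
      t1 = t0 + fromIntegral n1 * dt
      t2 = t0 + fromIntegral n2 * dt
      y1 = solAt n1
      y2 = solAt n2
  in y1 + (y2 - y1) * (t - t1) / (t2 - t1)
```

When \texttt{t} falls exactly on a grid point, the second term vanishes and the solver value is returned unchanged; otherwise the mismatch between stop time and time step is smoothed linearly between the two enclosing iterations.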
+The last step in this tweak is to add this function into the integrator function \textit{updateInteg}. The code is almost identical to the one presented in Chapter 3, \textit{Effectful Integrals}. The main difference is in line 11, where the interpolation function is being applied to \texttt{z}. Figure \ref{fig:diffInterpolate} shows the same visual representation for the \textit{updateInteg} function used in Chapter 4, but with the aforementioned modifications. \figuraBib{DiffIntegInterpolate}{The new \textit{updateInteg} function adds linear interpolation to the pipeline when receiving a parametric record}{}{fig:diffInterpolate}{width=.9\textwidth}% -This concludes the first tweak in \texttt{FACT}. Now, the mismatches between the stop time of the simulation and the time step are being treated differently, going back to the continuous domain thanks to the added interpolation. The next chapter, \textit{Caching the Speed Pill}, goes deep into the program's performance and how this can be fixed with a caching strategy. +This concludes the first tweak in \texttt{FACT}. Now, the mismatches between the stop time of the simulation and the time step are being treated differently, going back to the continuous domain thanks to the added interpolation. The next Chapter, \textit{Caching the Speed Pill}, goes deep into the program's performance and how it can be improved with a caching strategy. diff --git a/doc/MastersThesis/Lhs/Introduction.lhs b/doc/MastersThesis/Lhs/Introduction.lhs index 2f8e612..6ee70eb 100644 --- a/doc/MastersThesis/Lhs/Introduction.lhs +++ b/doc/MastersThesis/Lhs/Introduction.lhs @@ -20,41 +20,24 @@ Ingo et al. went even further~\cite{Sander2017} by presenting a framework based \section{Contribution} \label{sec:intro} -The aforementioned works --- the formal notion of MoCs, the ForSyDe framework and its interaction with modeling-related tools like Simulink --- comprise the domain of model-based design or \textit{model-based engineering}.
Furthermore, the main goal of the present work contribute to this area of CPS by creating a domain-specific language tool (DSL) for simulating continuous-time systems that addresses the absence of a formal basis. Thus, this tool will help to cope with the incompatibility of the mentioned sets of abstractions~\cite{LeeChallenges} --- the discreteness of digital computers with the continuous nature of physical phenomena. +The aforementioned works --- the formal notion of MoCs, the ForSyDe framework and its interaction with modeling-related tools like Simulink --- comprise the domain of model-based design or \textit{model-based engineering}. Furthermore, the main goal of this work is to contribute to this area of CPS by creating a domain-specific language tool (DSL) for simulating continuous-time systems that addresses the absence of a formal basis. Thus, this tool will help to deal with the incompatibility of the mentioned sets of abstractions~\cite{LeeChallenges} --- the discreteness of digital computers with the continuous nature of physical phenomena. The proposed DSL has three special properties of interest: -\begin{itemize} -\item it needs to be a set of well-defined \textit{operational} semantics, thus being \textit{executable}; -\item it needs to be related to a \textit{formalized} process; -\item it should be \textit{concise}; its lack of noise will bring familiarity to the \textit{system's designer} -- the pilot of the DSL which strives to execute a given specification or golden model. 
-\end{itemize} +\begin{enumerate} +\item it needs to have well-defined \textit{operational} semantics, as well as being a piece of \textit{executable} software; +\item it needs to be related to or inspired by a \textit{formal} foundation, moving past \textit{ad-hoc} implementations; +\item it should be \textit{concise}; its lack of noise will bring familiarity to the \textit{system's designer} --- the pilot of the DSL who strives to execute a given specification or golden model. +\end{enumerate} -The first aspect provides \textit{verification via simulation}, a type of verification that is useful when dealing with \textit{non-preserving} semantic transformations, i.e., modifications and tweaks in the model that do not assure that properties are being preserved. Such phenomena are common within the engineering domain, given that a lot of refinement goes into the modeling process in which previous proof-proved properties are not guaranteed to be maintained after iterations with the model. A work-around solution for this problem would be to prove again that the features are in fact present in the new model; an impractical activity when models start to scale in size and complexity. Thus, by using an executable tool as a virtual workbench, models that suffered from those transformations could be extensively tested and verified. +\subsection{Executable Simulation} -In order to address the second property, a solid and formal foundation, the tool is inspired by the general-purpose analog computer (GPAC) formal guidelines, proposed by Shannon~\cite{Shannon} in 1941. This concept was developed to model a Differential Analyzer --- an analog computer composed by a set of interconnected gears and shafts intended to solve numerical problems~\cite{Graca2004}. The mechanical parts represents \textit{physical quantities} and their interaction results in solving differential equations, a common activity in engineering, physics and other branches of science~\cite{Shannon}.
The model was based on a set of black boxes, so-called \textit{circuits} or \textit{analog units}, and a set of proved theorems that guarantees that the composition of these units are the minimum necessary to model the system, given some conditions. For instance, if a system is composed by a set of \textit{differentially algebraic} equations with prescribed initial conditions~\cite{Graca2003}, then a GPAC circuit can be built to model it. Later on, some extensions of the original GPAC were developed, going from solving unaddressed problems contained in the original scope of the model~\cite{Graca2003} all the way to make GPAC capable of expressing generable functions, Turing universality and hypertranscendental functions~\cite{Graca2004, Graca2016}. Furthermore, although the analog computer has been forgotten in favor of its digital counterpart~\cite{Graca2003}, recent studies in the development of hybrid systems~\cite{Edil2018} brought GPAC back to the spotlight in the CPS domain. +By making executable software capable of running continuous-time simulations, \textit{verification via simulation} will be available --- a type of verification that is useful when dealing with \textit{non-preserving} semantic transformations, i.e., modifications and tweaks in the model that do not assure that properties are being preserved. Such phenomena are common within the engineering domain, given that a lot of refinement goes into the modeling process in which previously proven properties are not guaranteed to be maintained after iterations with the model. A workaround for this problem would be to prove again that the features are in fact present in the new model; an impractical activity when models start to scale in size and complexity. Thus, by using an executable tool as a virtual workbench, models that suffered from those transformations could be extensively tested and verified.
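In its simplest form, verification via simulation means checking a simulated trajectory against a known reference solution; a minimal sketch of the idea (hypothetical code, unrelated to FACT's API) is:

```haskell
-- Simulate y' = y, y(0) = 1 with fixed-step Euler and verify the result
-- at t = 1 against the analytic solution e^t, within a tolerance.
simulateExp :: Double -> Int -> Double
simulateExp h n = iterate eulerStep 1.0 !! n
  where eulerStep y = y + h * y   -- Euler update for y' = y

verified :: Bool
verified =
  let h   = 1e-4
      n   = 10000                 -- h * n = 1, so we land on t = 1
      err = abs (simulateExp h n - exp 1)
  in err < 1e-3                   -- tolerance absorbing the method error
```

After a semantic transformation of the model, re-running such checks gives empirical confidence that the properties of interest survived, without re-proving them formally.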
-Finally, the third property of interest, conciseness to improve -the DSL's usability, will be assured by the use of the \textit{fixed-point combinator}; a mathematical construct used in the DSL's machinery to hide implementation details noise from the user's perspective, keeping on the surface only the constructs that matter from the designer's point of view. As the dissertation will explain, this happens due to an \textit{abstraction leak} in the original DSL~\cite{Lemos2022}, identified via -an overloaded syntax. -Once the leak is solved, it is expected that the \textit{target audience} --- system's designers with less programming experience but familiar with the system's mathematical description --- will be able to leverage the DSL either when improving the system's description, using the DSL as a refinement tool, or as a way to execute an already specified system. The present work being a direct continuation~\cite{Lemos2022}, it is important to highlight that this final property is the main differentiating factor between the two pieces. - -With these three core properties in mind, the proposed DSL will translate GPAC's original set of black boxes to some executable software leveraging mathematical constructs to simplify its usability. The programming language of choice was \textit{Haskell}, due to a variety of different reasons. First, the approach of making specialized programming languages, or \textit{vocabularies}, within consistent and well-defined host programming languages has already proven to be valuable, as noted by Landin~\cite{Landin1966}. Second, this strategy is already being used in the CPS domain in some degree, as showed by the ForSyDe framework~\cite{Sander2017, Seyed2020}. 
Third, Lee describes a lot of properties~\cite{LeeModeling} that matches the functional programming paradigm almost perfectly: - -\begin{itemize} - \item Prevent misconnected MoCs by using great interfaces in between $\Rightarrow$ Such interfaces can be built using Haskell's \textit{strong type system} - \item Enable composition of MoCs $\Rightarrow$ Composition is a first-class feature in functional programming languages - \item It should be possible to conjoin a functional model with an implementation model $\Rightarrow$ Functions programming languages makes a clear the separation between the \textit{denotational} aspect of the program, i.e., its meaning, from the \textit{operational} functionality - \item All too often the semantics emerge accidentally from the software implementation rather than being built-in from the start $\Rightarrow$ A denotative approach with no regard for implementation details is common in the functional paradigm - \item The challenge is to define MoCs that are sufficiently expressive and have strong formal properties that enable systematic validation of designs and correct-by-construction synthesis of implementations $\Rightarrow$ Functional languages are commonly used for formal mathematical applications, such as proof of theorems and properties, as well as also being known for "correct-by-construction" approaches -\end{itemize} - -The recognition that the functional paradigm (FP) provides better well-defined, mathematical and rigourous abstractions has been by Backus~\cite{Backus1978} in his Turing Award lecture; where he argued that FP is the path to liberate computing from the limitations of the \textit{von Neumann style} when thinking about systems. 
-Thus, continuous time being specified in mathematical terms, we believe that the use of functional programming for modeling continuous time is not a coincidence; properties that are established as fundamental to leverage better abstractions for CPS simulation seem to be within or better described in the functional programming paradigm. Furthermore, this implementation is based on \texttt{Aivika}~\footnote{\texttt{Aivika} \href{https://github.com/dsorokin/aivika}{\textcolor{blue}{source code}}.} --- an open source multi-method library for simulating a variety of paradigms, including partial support for physical dynamics, written in Haskell. Our version is modified for our needs, such as demonstrating similarities between the implementation and GPAC, shrinking some functionality in favor of focusing on continuous time modeling, and re-thinking the overall organization of the project for better understanding, alongside code refactoring using other Haskell's abstractions. So, this reduced and refactored version of \texttt{Aivika}, so-called \texttt{FACT}~\footnote{\texttt{FACT} \href{https://github.com/FP-Modeling/fact/releases/tag/3.0}{\textcolor{blue}{source code}}.}, will be a Haskell Embedded Domain-Specific Language (HEDSL) within the model-based engineering domain. The built DSL will explore Haskell's specific features and details, such as the type system and typeclasses, to solve differential equations. Figure \ref{fig:introExample} shows a side-by-side comparison between the original implementation of Lorenz Attractor in FACT, presented in~\cite{Lemos2022}, and its final form for the same physical system. +Furthermore, this implementation is based on \texttt{Aivika}~\footnote{\texttt{Aivika} \href{https://github.com/dsorokin/aivika}{\textcolor{blue}{source code}}.} --- an open source multi-method library for simulating a variety of paradigms, including partial support for physical dynamics, written in Haskell. 
Our version is modified for our needs, such as demonstrating similarities between the implementation and GPAC, shrinking some functionality in favor of focusing on continuous time modeling, and re-thinking the overall organization of the project for better understanding, alongside code refactoring using other Haskell abstractions. So, this reduced and refactored version of \texttt{Aivika}, so-called \texttt{FACT}~\footnote{\texttt{FACT} \href{https://github.com/FP-Modeling/fact/releases/tag/3.0}{\textcolor{blue}{source code}}.}, will be a Haskell Embedded Domain-Specific Language (HEDSL) within the model-based engineering domain. The built DSL will explore Haskell's specific features and details, such as the type system and typeclasses, to solve differential equations. Figure \ref{fig:introExample} shows a side-by-side comparison between the original implementation of the Lorenz Attractor in FACT, presented in~\cite{Lemos2022}, and its final form, so-called FFACT, for the same physical system. \begin{figure}[ht!] \begin{minipage}{0.45\linewidth} -% \vspace{-0.8cm} \begin{purespec} -- Original version of FACT lorenzModel = do @@ -90,9 +73,181 @@ Thus, continuous time being specified in mathematical terms, we believe that th \label{fig:introExample} \end{figure} +\subsection{Formal Foundation} + +The tool is inspired by the general-purpose analog computer (GPAC) formal guidelines, proposed by Shannon~\cite{Shannon} in 1941, as a solid and formal foundation. This concept was developed to model a Differential Analyzer --- an analog computer composed of a set of interconnected gears and shafts intended to solve numerical problems~\cite{Graca2004}. The mechanical parts represent \textit{physical quantities} and their interaction results in solving differential equations, a common activity in engineering, physics and other branches of science~\cite{Shannon}.
The model was based on a set of black boxes, so-called \textit{circuits} or \textit{analog units}, and a set of proved theorems that guarantees that the composition of these units is the minimum necessary to model the system, given some conditions. For instance, if a system is composed of a set of \textit{differentially algebraic} equations with prescribed initial conditions~\cite{Graca2003}, then a GPAC circuit can be built to model it. Later on, some extensions of the original GPAC were developed, going from solving unaddressed problems contained in the original scope of the model~\cite{Graca2003} all the way to making GPAC capable of expressing generable functions, Turing universality and hypertranscendental functions~\cite{Graca2004, Graca2016}. Furthermore, although the analog computer has been forgotten in favor of its digital counterpart~\cite{Graca2003}, recent studies in the development of hybrid systems~\cite{Edil2018} brought GPAC back to the spotlight in the CPS domain. + +The HEDSL will translate GPAC's original set of black boxes to some executable software leveraging mathematical constructs to simplify its usability. The programming language of choice was \textit{Haskell} --- a well-known language in the functional paradigm (FP). The recognition that such a paradigm provides better well-defined, mathematical and rigorous abstractions was proposed by Backus~\cite{Backus1978} in his Turing Award lecture, where he argued that FP is the path to liberate computing from the limitations of the \textit{von Neumann style} when thinking about systems. Thus, with continuous time being specified in mathematical terms, we believe that the use of functional programming for modeling continuous time is not a coincidence; properties that are established as fundamental to leverage better abstractions for CPS simulation seem to be within or better described in FP.
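As a small taste of what such FP abstractions look like (a toy, not FACT's \texttt{CT} type): GPAC's constant, adder and multiplier units can be modeled as a \texttt{Num} instance over time-varying functions, so a polynomial circuit is written with ordinary arithmetic syntax.

```haskell
-- Toy signal type: a value varying over continuous time (not FACT's CT).
newtype Signal = Signal { at :: Double -> Double }

instance Num Signal where
  Signal f + Signal g = Signal (\t -> f t + g t)       -- adder unit
  Signal f * Signal g = Signal (\t -> f t * g t)       -- multiplier unit
  fromInteger k       = Signal (const (fromInteger k)) -- constant unit
  negate (Signal f)   = Signal (negate . f)
  abs (Signal f)      = Signal (abs . f)
  signum (Signal f)   = Signal (signum . f)

-- The input t itself.
time :: Signal
time = Signal id

-- A polynomial circuit, p(t) = 3t^2 + 2t + 1, wired acyclically from
-- constants, adders and multipliers only.
poly :: Signal
poly = 3 * time * time + 2 * time + 1
```

The typeclass does the wiring: each arithmetic operator builds a new acyclic circuit out of smaller ones, which is exactly the compositional flavor GPAC prescribes.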
+Lee describes a lot of properties~\cite{LeeModeling} that match this programming +paradigm almost perfectly: + +\begin{enumerate} + \item Prevent misconnected MoCs by using great interfaces in between $\Rightarrow$ Such interfaces can be built using Haskell's \textit{strong type system} + \item Enable composition of MoCs $\Rightarrow$ Composition is a first-class feature in functional programming languages + \item It should be possible to conjoin a functional model with an implementation model $\Rightarrow$ Functional programming languages make a clear separation between the \textit{denotational} aspect of the program, i.e., its meaning, and the \textit{operational} functionality + \item All too often the semantics emerge accidentally from the software implementation rather than being built-in from the start $\Rightarrow$ A denotative approach with no regard for implementation details is common in the functional paradigm + \item The challenge is to define MoCs that are sufficiently expressive and have strong formal properties that enable systematic validation of designs and correct-by-construction synthesis of implementations $\Rightarrow$ Functional languages are commonly used for formal mathematical applications, such as proof of theorems and properties, as well as also being known for ``correct-by-construction'' approaches +\end{enumerate} + +In terms of the DSL being \textit{embedded} in Haskell, this approach of making specialized programming languages, or \textit{vocabularies}, within consistent and well-defined host programming languages, has already proven to be valuable, as noted by Landin~\cite{Landin1966}. Further, this strategy is already being used in the CPS domain to some degree, as shown by the ForSyDe framework~\cite{Sander2017, Seyed2020}.
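Item 1 in the list above can be hinted at with phantom types: a signal tagged with its MoC cannot be passed where a different MoC is expected, and the mistake is caught at compile time (a toy illustration, not ForSyDe's or FACT's actual encoding):

```haskell
-- Phantom tags naming the model of computation a signal belongs to.
data ContinuousTime
data DiscreteEvent

-- A signal tagged with its MoC; the tag exists only at the type level.
newtype Sig moc a = Sig { sample :: Double -> a }

-- Accepts only continuous-time signals; feeding it a Sig DiscreteEvent
-- value is rejected by the type checker, not discovered at runtime.
scaleCT :: Double -> Sig ContinuousTime Double -> Sig ContinuousTime Double
scaleCT k (Sig f) = Sig ((k *) . f)

ramp :: Sig ContinuousTime Double
ramp = Sig id
```

Because \texttt{moc} never appears on the right-hand side of the \texttt{newtype}, the tag costs nothing at runtime; the interface guarantee lives entirely in the strong type system.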
+ +\subsection{Conciseness} + +Finally, conciseness to improve the DSL's usability will be assured by the use of the \textit{fixed-point combinator}; a mathematical construct used in the DSL's machinery to hide implementation-detail noise from the user's perspective, keeping on the surface only the constructs that matter from the designer's point of view. As the dissertation will explain, this happens due to an \textit{abstraction leak} in the original DSL~\cite{Lemos2022}, identified via +an overloaded syntax. Once the leak is solved, it is expected that the \textit{target audience} --- system's designers with less programming experience but familiar with the system's mathematical description --- will be able to leverage the DSL either when improving the system's description, using the DSL as a refinement tool, or as a way to execute an already specified system. Given that the present work, FFACT, is a direct continuation of FACT~\cite{Lemos2022}, it is important to highlight that this final property is the main differentiating factor between the two pieces. + +When comparing models in FFACT to other implementations in other ecosystems and programming languages, FFACT's conciseness brings more familiarity, i.e., +one using the HEDSL needs less knowledge about the host programming language, Haskell in our case, \textit{and} one can more easily bridge the gap between a mathematical +description of the problem and its analogue written in FFACT, due to less syntactical burden and noise from a user's perspective. Figures~\ref{fig:lorenz-simulink}, +~\ref{fig:lorenz-matlab},~\ref{fig:lorenz-python},~\ref{fig:lorenz-mathematica}, and~\ref{fig:lorenz-yampa} show some comparisons +between the same Lorenz Attractor model in different technologies. It is worth noting that these examples only show \textit{the system's description}, i.e., the \textit{drivers} of the simulations +are being omitted. + +\begin{figure}[ht!]
+ \begin{minipage}{0.45\linewidth} + \begin{purespec} + lorenzModel = mdo + x <- integ (sigma * (y - x)) 1.0 + y <- integ (x * (rho - z) - y) 1.0 + z <- integ (x * y - beta * z) 1.0 + let sigma = 10.0 + rho = 28.0 + beta = 8.0 / 3.0 + return $ sequence [x, y, z] + \end{purespec} + \end{minipage} + \begin{minipage}{0.5\linewidth} + \centering + \includegraphics[width=0.95\linewidth]{MastersThesis/img/lorenzSimulink} + \end{minipage} +\caption{Comparison of the Lorenz Attractor Model between FFACT and a Simulink implementation.} +\label{fig:lorenz-simulink} +\end{figure} + +\begin{figure}[ht!] + \begin{minipage}{0.45\linewidth} + \begin{purespec} + lorenzModel = mdo + x <- integ (sigma * (y - x)) 1.0 + y <- integ (x * (rho - z) - y) 1.0 + z <- integ (x * y - beta * z) 1.0 + let sigma = 10.0 + rho = 28.0 + beta = 8.0 / 3.0 + return $ sequence [x, y, z] + \end{purespec} + \end{minipage} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \begin{minipage}{0.54\linewidth} + \begin{matlab} + sigma = 10; + beta = 8/3; + rho = 28; + f = @(t,vars) ... + [sigma*(vars(2) - vars(1)); + vars(1)*(rho - vars(3)) - vars(2); + vars(1)*vars(2) - beta*vars(3)]; + [t,vars] = ode45(f,[0 100],[1 1 1]); + \end{matlab} + \end{minipage} +\caption{Comparison of the Lorenz Attractor Model between FFACT and a Matlab implementation.} +\label{fig:lorenz-matlab} +\end{figure} + +\begin{figure}[ht!]
+ \begin{minipage}{0.45\linewidth} + \begin{purespec} + lorenzModel = mdo + x <- integ (sigma * (y - x)) 1.0 + y <- integ (x * (rho - z) - y) 1.0 + z <- integ (x * y - beta * z) 1.0 + let sigma = 10.0 + rho = 28.0 + beta = 8.0 / 3.0 + return $ sequence [x, y, z] + \end{purespec} + \end{minipage} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \begin{minipage}{0.54\linewidth} + \begin{python} + import numpy as np + def lorenzModel(x, y, z): + sigma = 10 + rho = 28 + beta = 8/3 + x_dot = sigma*(y - x) + y_dot = rho*x - y - x*z + z_dot = x*y - beta*z + return np.array([x_dot, y_dot, z_dot]) + \end{python} + \end{minipage} +\caption{Comparison of the Lorenz Attractor Model between FFACT and a Python implementation.} +\label{fig:lorenz-python} +\end{figure} + +\begin{figure}[ht!] + \begin{minipage}{0.45\linewidth} + \begin{purespec} + lorenzModel = mdo + x <- integ (sigma * (y - x)) 1.0 + y <- integ (x * (rho - z) - y) 1.0 + z <- integ (x * y - beta * z) 1.0 + let sigma = 10.0 + rho = 28.0 + beta = 8.0 / 3.0 + return $ sequence [x, y, z] + \end{purespec} + \end{minipage} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \begin{minipage}{0.54\linewidth} + \begin{mathematica} + lorenzModel = NonlinearStateSpaceModel[ + {{sigma (y - x), + x (rho - z) - y, + x y - beta z}, {}}, + {x, y, z}, + {sigma, rho, beta}]; + soln[t_] = StateResponse[ + {lorenzModel, {10, 10, 10}}, + {10, 28, 8/3}, + {t, 0, 50}]; + \end{mathematica} + \end{minipage} +\caption{Comparison of the Lorenz Attractor Model between FFACT and a Mathematica implementation.} +\label{fig:lorenz-mathematica} +\end{figure} + +\begin{figure}[ht!]
+ \begin{minipage}{0.45\linewidth} + \begin{purespec} + lorenzModel = mdo + x <- integ (sigma * (y - x)) 1.0 + y <- integ (x * (rho - z) - y) 1.0 + z <- integ (x * y - beta * z) 1.0 + let sigma = 10.0 + rho = 28.0 + beta = 8.0 / 3.0 + return $ sequence [x, y, z] + \end{purespec} + \end{minipage} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \hspace{-2.4cm} + \begin{minipage}{0.64\linewidth} + \begin{purespec} + lorenzModel = proc () -> do + rec x <- pre >>> imIntegral 1.0 -< sigma*(y - x) + y <- pre >>> imIntegral 1.0 -< x*(rho - z) - y + z <- pre >>> imIntegral 1.0 -< (x*y) - (beta*z) + let sigma = 10.0 + rho = 28.0 + beta = 8.0 / 3.0 + returnA -< (x, y, z) + \end{purespec} + \end{minipage} +\caption{Comparison of the Lorenz Attractor Model between FFACT and a Yampa implementation (also in Haskell).} +\label{fig:lorenz-yampa} +\end{figure} + +\newpage + \section{Outline} -This thesis is a step in a broader story, started in 2018 by Edil Medeiros et al.~\cite{Edil2018}. Medeiros' work had some limitations, such as having difficulty +This dissertation is a step in a broader story, started in 2018 by Edil Medeiros et al.~\cite{Edil2018}. Medeiros' work had some limitations, such as having difficulty modeling systems via explicit signal manipulation, and later publications addressed this issue~\cite{Lemos2022, EdilLemos2023}. The chapters in this work encompass the previous milestones from this story, giving the reader a complete overview from the ground up in this research thread. @@ -100,8 +255,10 @@ Chapter 2, \textit{Design Philosophy}, presents the foundation of this work, sta original work and this work are far apart, the mathematical base is the same. Chapters 3 to 6 describe subsequent improvements made in 2022~\cite{Lemos2022} and 2023~\cite{EdilLemos2023}. These chapters go into detail about the DSL's implementation, such as the used abstractions, going through executable examples, pointing out and addressing problems in its usability and design.
Issues like performance, and continuous time implementation are explained
-and then addressed. Whilst the implementation of Chapters 2 to 6 were vastly improved during the making of this dissertation, the latest inclusion to this research is
+and then addressed. Whilst the implementation of Chapters 2 to 6 was vastly improved during the making of this dissertation, alongside improvements
+to the writing of their respective chapters,
+the latest inclusion to this research is
concentrated in Chapter 7, \textit{Fixing Recursion}, which dedicates itself to improving an abstraction leak in the most recent published version of the DSL~\cite{EdilLemos2023}. Those improvements leverage the \textit{fixed point combinator} to eliminate abstraction leaks, thus making the DSL more concise and familiar to a system's designer.
-These enhacements were submitted and are waiting approval in a related journal~\footnote{\href{https://www.cambridge.org/core/journals/journal-of-functional-programming}{\textcolor{blue}{Journal of Functional Programming}}.}. Finally, limitations, future improvements and final thoughts are drawn in chapter 8, \textit{Conclusion}.
+These enhancements were submitted to a related journal and are awaiting approval~\footnote{\href{https://www.cambridge.org/core/journals/journal-of-functional-programming}{\textcolor{blue}{Journal of Functional Programming}}.}. Finally, limitations, future improvements, and final thoughts are presented in Chapter 8, \textit{Conclusion}.
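As a side note on the comparison figures above: the Python listing returns only the derivative vector of the Lorenz system, so it needs an external driver to produce a trajectory. A minimal sketch of such a driver is below — the `simulate` helper, the forward-Euler scheme, and the step size/count are illustrative assumptions, not part of the thesis (FFACT itself drives models with Runge-Kutta solvers).

```python
import numpy as np

def lorenzModel(x, y, z):
    # Same right-hand side as the Python listing in the comparison figure.
    sigma = 10
    rho = 28
    beta = 8 / 3
    x_dot = sigma * (y - x)
    y_dot = rho * x - y - x * z
    z_dot = x * y - beta * z
    return np.array([x_dot, y_dot, z_dot])

def simulate(state0, dt=0.005, steps=2000):
    # Hypothetical forward-Euler driver, for intuition only.
    trajectory = [np.asarray(state0, dtype=float)]
    for _ in range(steps):
        s = trajectory[-1]
        trajectory.append(s + dt * lorenzModel(*s))
    return np.array(trajectory)

traj = simulate([1.0, 1.0, 1.0])
```

This makes the division of labor in the comparison explicit: every implementation in the figures specifies the same derivatives, and the surrounding tool (FFACT's `integ`, Mathematica's `StateResponse`, Yampa's `imIntegral`, or a hand-rolled loop like the one above) supplies the integration.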
diff --git a/doc/MastersThesis/bibliography.bib b/doc/MastersThesis/bibliography.bib index 9181212..f416efe 100644 --- a/doc/MastersThesis/bibliography.bib +++ b/doc/MastersThesis/bibliography.bib @@ -438,4 +438,14 @@ @book{knuth1992 isbn = {0937073806}, publisher = {Center for the Study of Language and Information}, address = {USA} +} + +@article{Simulink, +author = {Ekhande, Rahul}, +year = {2014}, +month = {01}, +title = {Chaotic Signal for Signal Masking in Digital Communications}, +volume = {4}, +journal = {IOSR Journal of Engineering}, +doi = {10.9790/3021-04252933} } \ No newline at end of file diff --git a/doc/MastersThesis/img/lorenzSimulink.pdf b/doc/MastersThesis/img/lorenzSimulink.pdf new file mode 100644 index 0000000000000000000000000000000000000000..92d3ec07c6ce11b8f6fda952b5610e6c6e951f75 GIT binary patch literal 7225 zcmd^Ec|26>|2HN}V;dx)Q0tQAwyA#vK3iM zU z5J&*@xawhd@SAB=uPtOcK>8E0BYXRKm{a!xJ2hcUM~?syiQ0%KG-~kQ!PFo+Y&QfT zA{yX`DWQR$00GH}!w+5~s|#NLMh^0&QvjU>N(h}s_6Wr6!63#Eqp(5l-765obr+;! 
z>@i5{=9%+JL=+%8wFwJpMWR5Eod6!L4S;tf@1-LV!zHnRvpFxPJI{hM^SvTSN){aa zR+0InVenf*J?Y#!aus3l8$ANyqQZH^;H~@u=wupPFuYZO2b~O(y{O(~q;A{&ypa&~ zV9t87pD%@u1cj6{fJ)mE?BPWQ2uP%G!p!~XA)CoG3u<66H3+Go9&FLhBV;!~_?993 ze7^L#eAwgpEbQ$X%{rWsuPSUgHJ3A4T-Bkf+N3T2!sJpx!9_>Cq#NSmmjf*X<>RsD z|MHK-af>P0+OneLJA;RJS%RJ)EQ@hC!W3@1IOF6##wBR|Z*@&u!PQD`gQ_@*R6I>e zJn5cvRWkHnrL!|mvaSha)(MG2I9dr+S_(TMg-xPX*2S*hS2n>1zC3*y4@mR{M57p= zeclQ|cXW5UnXazu{KAWS|L}?QKT*fANx$8f!2`o;+dqVv4+S8Ie6DBkV9#WjN84c&9c$ul?Qo`-5@KNsEzb<^AIs%DVX!>I&#}IfLfZ zwx&6;u{sduYYGx?sLy$l#y8w%UZOYOT~G+V9TaCMXSV}3B~ z>r%_dZ94^E*A1pF$10a?21=o*Pm3svb=L^^ux?6C7V_?C5o7h)MIoZoHK(g)kw4I8X@XyT{;^S(0D)*6PI_7+uR>me2U@&Bdt!vQDP>@(DPSw#pD+GkU-#u}T7#aO+)k?X3*p2kr=Zu~Nr?7D*K6_19x8omtjJu6$g6$-L` za0E^1BxGCwhIJyW0}q(kUuj%vaeF59c7FM(_Ff66gY3xUoU55fL~3JG3TLRU%+DHe zDfmnGx>|*vK^bxEnvwUVnMKGd#SA(g?F+Qg)Rl3S(rI=>1l^|0GlrWv^5));sKL+aU0|!QDRcq4X}~D zg|TprTyI)btOX~u#SH!eDTC3Az?88Af>34;S{$o$LjVW(hpkfq>vyg=<7RVZRn}HC zR;PNL*M2mMUqnCj5980hDL&~sJob6hCfKb$1&RexYJ@;Vl-9#J zSg5+LL*Mw4(cyR5vTrAnV%Oh_HUx`{&So~>uqxW^xLdV|2>@$AZ@LLa62+)u_9vZ58hu~EBs$;wzeH7V$z z$RonOk*{jwS@x}b_WX~d2Pn>OOzG>kM!7ceGLsb1Ed9=gDnt4H z-A<;&t9JkcLj$qo{i|9fI)$!Zn_db%5?Uz_InSGnWoyME}w(2?y`jO9gvX18kZ_lZL1%g?^3<6(Y1vTQ>ZN{)Z6^7U}O(mY(s z5r67vaUU3lEu&fg`Y+>rf{CQHbydkOG3bjk_FiS?bn`@PVF@2F>;}ilcH&{qrpw(s zko}*4^*i-idJYuEyt1MJf&I&QdB5??6GB$M|4MGBX2}D`uLX;)N88Uerwp35H4HQ~ z-k(SriGCuxW2{3O(DKTQvK@3LH21tQ^=`D{m@v9!cPapxkE}Rl+MMnFwYd+@aToS5 zqig+-#8M38I*&GUoN@{(!9HGw3_SCnbV}@gz5>}zM+<{;wz>}%Q;|J1z`A0(Xqkej z@8iyi9YH>0V!BhALT5qH^P6Nk#zT}4fbrQ8Vj%l7r%dC_;~ zy@@@uzL1<}0jS$1!eVO`tn7ZAf0(78^x4rdHzVti2 z;JPupRw0-(xdCd&wZ<3v?Qtb?ZZ!@VV^tD2*JbNNV{TMouuDFz#8-^z+i#jLYd#g3 z0VE$a-@gR6Zmf}@z$q;vwKq7>fr~pRSmyo$;NJW3uLmOUX-w zld$whj~*qs=CD6U?~}cpiwEt(K_4N=#)cug=EKr*FgHS&H9^;`wU5OrS2+cb zpdeNAlf2BMm~v1|%+zsNgRcCST3+Tx`F-Nd87XWirauZhRl?p8zW5ditalj>&xwn= z<{z`QSG*JZknW6TPS+$g6`dkM&0|nNk`GCpmzA2^H#Z#`cmHl&gK$LN*{8R70aalS zQAlHZ#hE_OT^hI2$&?3t&rX#Kf@6`qp>(Ip)bmQvr-FkIR@*_5cEpshBK)fBqb&Zd 
z7LM1hkTT!o3)*|XO74-JebMEn-;z^fVsBP6HZ>et;g7Pn#Epq@#(5Ywl^%u!Ddt^0 zFOID`an0EHT0oo8w40XfZihtnlj#dpZ>{EdNACNN>ZpJUMe^7xb~g!sp*ZTCYu*Lh zD3`G!8#!iokctuBw*ASTm{y6!@vYK0>4Dk^E3p(l@}03{XQq<&Oy%dXl{yz+PcloH8S1f3Cu0S5 zgrLo3CSvyVN0+_N9e(o>8^JkteE==(Dw4nlo{rM|G#!)kd3<;r`U=IepNNLmDtu7j zWsXiRPaIInAI)D3x$=SgODHi>y|w(H3@sItL@QX;VxZ@H#LBVX&IWw4{fdrYK3||_`plURwCG3TF+TVO@}0u z(UBqxn7{Im@s0?yo2lf%d3oNE-Fj(+vC>#|MKcFDeFbN-a!oQ{lqNFu)jNc%WM`XA0NGrdp>1e>rZ_K- zg%OZCw-%7N9K6}S`Cm}bF0ZqBaMPRkSjsj|ht1N2TZ7+!93=KH zhZ?2UZQ3aL=m2HcUx70DOzVqN+74-@vp$&Dd|K3#o^Xj{*`IB)EpIQ8#0IVedmqL; zSI@=+T^d+@0q`!o+-cF>iGGedXGRW*KBhPiI;0 zEl$*SX6hK%i`=X`a$7X3Pz-FOQ{`6gWDYwxlTHUFK49w$+8bbj2ZV!V9YM8Km9R-~EoGy>%NP%3#aGRQ-+$BLc5ENMAz~dl0#-sB7s5Y`S2G?1FHVQ1XkhQ{ zA#{tLVugr;^_91Gn{^Ev-mrSwSa@vce0mCn)S_|qLB;a$g9&51BJVCeYJ_4CYR&0A z9isx;d@X@$a;$)Cp`AF%ei*N6#|?3k4AvzU)DJasWy_}2Cu{b_@)J+@dA8gapqL)| z>a$824(-9`P>pUOtW2tSeX;jv9_Yxu_`Zg#!Ei|E{m1ICSi9kf>ux26B4BFd$`Q_& z0Q%x@0UfL53^^Mj%V+!kxX$IB?7P{DLy+J5Ua*PC0P6Fi&CYHnh06r|<}wHG7_u)o zS{n4{$zi{3vuPor;5>4o>GCy84<6Ejn&<3<2>VIn2+CY)$is_;c;YZtFiZ;J&xt z;L*QqzRkDW2;VxXKla;hi1F~e;Cb!v_Ze~7MhqyoAzJz z9ql~eQ2rt8|E=%%V@ChB`M3~2w9R}MZiFbkzqsI-MJNU(Y?9Z{nRUoc%8Q3unJYatA-Dwx1E zi@1rXM&?LZNVrH?NO%kG>B6Qg;FAIR|B)^DD8xI|3-0F6*XO?)F=gij3sWzaxN zQ%grvTT5F@lStGds9^ARRPX<^_I*FuOC$SWGy#Gh22T9^2O#O_XzKtzz@HehnIaz` z=sTtZ@47!>1Oj{p@)HKv3Ln^gKdbSe(Ya+J>FSX%ii(z-tT6uraD@%G literal 0 HcmV?d00001 diff --git a/doc/MastersThesis/thesis.lhs b/doc/MastersThesis/thesis.lhs index 163eb74..c0db057 100644 --- a/doc/MastersThesis/thesis.lhs +++ b/doc/MastersThesis/thesis.lhs @@ -32,6 +32,9 @@ \newminted[code]{haskell}{breaklines,autogobble,linenos=true, numberblanklines=false, fontsize=\footnotesize} \newminted[spec]{haskell}{breaklines,autogobble,linenos=true, numberblanklines=false, fontsize=\footnotesize} \newminted[purespec]{haskell}{breaklines,autogobble,linenos=false, numberblanklines=false, fontsize=\footnotesize} +\newminted[matlab]{matlab}{breaklines,autogobble,linenos=false, 
numberblanklines=false, fontsize=\footnotesize} +\newminted[python]{python}{breaklines,autogobble,linenos=false, numberblanklines=false, fontsize=\footnotesize} +\newminted[mathematica]{mathematica}{breaklines,autogobble,linenos=false, numberblanklines=false, fontsize=\footnotesize} \orientador{\prof Eduardo Peixoto}{CIC/UnB}% \coorientador{\prof José Edil Guimarães}{ENE/UnB}% diff --git a/doc/MastersThesis/thesis.lof b/doc/MastersThesis/thesis.lof index aecc6d9..e0147e7 100644 --- a/doc/MastersThesis/thesis.lof +++ b/doc/MastersThesis/thesis.lof @@ -2,58 +2,63 @@ \babel@toc {american}{}\relax \babel@toc {american}{}\relax \addvspace {10\p@ } -\contentsline {figure}{\numberline {1.1}{\ignorespaces The translation between the world of software and the mathematical description of differential equations are explicit in the final version of \texttt {FACT}.}}{5}{figure.caption.8}% -\addvspace {10\p@ } -\contentsline {figure}{\numberline {2.1}{\ignorespaces The combination of these four basic units compose any GPAC circuit (taken from~\cite {Edil2018} with permission).}}{8}{figure.caption.9}% -\contentsline {figure}{\numberline {2.2}{\ignorespaces Polynomial circuits resembles combinational circuits, in which the circuit respond instantly to changes on its inputs (taken from~\cite {Edil2018} with permission).}}{9}{figure.caption.10}% -\contentsline {figure}{\numberline {2.3}{\ignorespaces Types are not just labels; they enhance the manipulated data with new information. Their difference in shape can work as the interface for the data.}}{10}{figure.caption.11}% -\contentsline {figure}{\numberline {2.4}{\ignorespaces Functions' signatures are contracts; they purespecify which shape the input information has as well as which shape the output information will have.}}{10}{figure.caption.11}% -\contentsline {figure}{\numberline {2.5}{\ignorespaces Sum types can be understood in terms of sets, in which the members of the set are available candidates for the outer shell type. 
Parity and possible values in digital states are examples.}}{11}{figure.caption.12}% -\contentsline {figure}{\numberline {2.6}{\ignorespaces Product types are a combination of different sets, where you pick a representative from each one. Digital clocks' time and objects' coordinates in space are common use cases. In Haskell, a product type can be defined using a \textit {record} alongside with the constructor, where the labels for each member inside it are explicit.}}{11}{figure.caption.13}% -\contentsline {figure}{\numberline {2.7}{\ignorespaces Depending on the application, different representations of the same structure need to used due to the domain of interest and/or memory constraints.}}{12}{figure.caption.14}% -\contentsline {figure}{\numberline {2.8}{\ignorespaces The minimum requirement for the \texttt {Ord} typeclass is the $<=$ operator, meaning that the functions $<$, $<=$, $>$, $>=$, \texttt {max} and \texttt {min} are now unlocked for the type \texttt {ClockTime} after the implementation. Typeclasses can be viewed as a third dimension in a type.}}{12}{figure.caption.15}% -\contentsline {figure}{\numberline {2.9}{\ignorespaces Replacements for the validation function within a pipeline like the above is common.}}{13}{figure.caption.16}% -\contentsline {figure}{\numberline {2.10}{\ignorespaces The initial value is used as a starting point for the procedure. The algorithm continues until the time of interest is reached in the unknown function. Due to its large time step, the final answer is really far-off from the expected result.}}{15}{figure.caption.17}% -\contentsline {figure}{\numberline {2.11}{\ignorespaces In Haskell, the \texttt {type} keyword works for alias. 
The first draft of the \texttt {CT} type is a \textit {function}, in which providing a floating point value as time returns another value as outcome.}}{15}{figure.caption.18}% -\contentsline {figure}{\numberline {2.12}{\ignorespaces The \texttt {Parameters} type represents a given moment in time, carrying over all the necessary information to execute a solver step until the time limit is reached. Some useful typeclasses are being derived to these types, given that Haskell is capable of inferring the implementation of typeclasses in simple cases.}}{16}{figure.caption.19}% -\contentsline {figure}{\numberline {2.13}{\ignorespaces The \texttt {CT} type is a function of from time related information to an arbitrary potentially effectful outcome value.}}{17}{figure.caption.20}% -\contentsline {figure}{\numberline {2.14}{\ignorespaces The \texttt {CT} type can leverage monad transformers in Haskell via \texttt {Reader} in combination with \texttt {IO}.}}{17}{figure.caption.21}% -\addvspace {10\p@ } -\contentsline {figure}{\numberline {3.1}{\ignorespaces Given a parametric record \texttt {ps} and a dynamic value \texttt {da}, the \textit {fmap} functor of the \texttt {CT} type applies the former to the latter. Because the final result is wrapped inside the \texttt {IO} shell, a second \textit {fmap} is necessary.}}{19}{figure.caption.22}% -\contentsline {figure}{\numberline {3.2}{\ignorespaces With the \texttt {Applicative} typeclass, it is possible to cope with functions inside the \texttt {CT} type. Again, the \textit {fmap} from \texttt {IO} is being used in the implementation.}}{20}{figure.caption.23}% -\contentsline {figure}{\numberline {3.3}{\ignorespaces The $>>=$ operator used in the implementation is the \textit {bind} from the \texttt {IO} shell. 
This indicates that when dealing with monads within monads, it is frequent to use the implementation of the internal members.}}{21}{figure.caption.24}% -\contentsline {figure}{\numberline {3.4}{\ignorespaces The typeclass \texttt {MonadIO} transforms a given value wrapped in \texttt {IO} into a different monad. In this case, the parameter \texttt {m} of the function is the output of the \texttt {CT} type.}}{21}{figure.caption.25}% -\contentsline {figure}{\numberline {3.5}{\ignorespaces The ability of lifting numerical values to the \texttt {CT} type resembles three FF-GPAC analog circuits: \texttt {Constant}, \texttt {Adder} and \texttt {Multiplier}.}}{22}{figure.caption.26}% -\contentsline {figure}{\numberline {3.6}{\ignorespaces State Machines are a common abstraction in computer science due to its easy mapping between function calls and states. Memory regions and peripherals are embedded with the idea of a state, not only pure functions. Further, side effects can even act as the trigger to move from one state to another, meaning that executing a simple function can do more than return a value. 
Its internal guts can significantly modify the state machine.}}{23}{figure.caption.27}% -\contentsline {figure}{\numberline {3.7}{\ignorespaces The integrator functions attend the rules of composition of FF-GPAC, whilst the \texttt {CT} and \texttt {Integrator} types match the four basic units.}}{28}{figure.caption.28}% -\addvspace {10\p@ } -\contentsline {figure}{\numberline {4.1}{\ignorespaces The integrator functions are essential to create and interconnect combinational and feedback-dependent circuits.}}{32}{figure.caption.29}% -\contentsline {figure}{\numberline {4.2}{\ignorespaces The developed DSL translates a system described by differential equations to an executable model that resembles FF-GPAC's description.}}{32}{figure.caption.30}% -\contentsline {figure}{\numberline {4.3}{\ignorespaces Because the list implements the \texttt {Traversable} typeclass, it allows this type to use the \textit {traverse} and \textit {sequence} functions, in which both are related to changing the internal behaviour of the nested structures.}}{33}{figure.caption.31}% -\contentsline {figure}{\numberline {4.4}{\ignorespaces A \textit {state vector} comprises multiple state variables and requires the use of the \textit {sequence} function to sync time across all variables.}}{33}{figure.caption.32}% -\contentsline {figure}{\numberline {4.5}{\ignorespaces When building a model for simulation, the above pipeline is always used, from both points of view. The operations with meaning, i.e., the ones in the \texttt {Semantics} pipeline, are mapped to executable operations in the \texttt {Operational} pipeline, and vice-versa.}}{34}{figure.caption.33}% -\contentsline {figure}{\numberline {4.6}{\ignorespaces Using only FF-GPAC's basic units and their composition rules, it's possible to model the Lorenz Attractor example.}}{37}{figure.caption.34}% -\contentsline {figure}{\numberline {4.7}{\ignorespaces After \textit {createInteg}, this record is the final image of the integrator. 
The function \textit {initialize} gives us protecting against wrong records of the type \texttt {Parameters}, assuring it begins from the first iteration, i.e., $t_0$.}}{38}{figure.caption.35}% -\contentsline {figure}{\numberline {4.8}{\ignorespaces After \textit {readInteg}, the final floating point values is obtained by reading from memory a computation and passing to it the received parameters record. The result of this application, $v$, is the returned value.}}{39}{figure.caption.36}% -\contentsline {figure}{\numberline {4.9}{\ignorespaces The \textit {updateInteg} function only does side effects, meaning that only affects memory. The internal variable \texttt {c} is a pointer to the computation \textit {itself}, i.e., the computation being created references this exact procedure.}}{39}{figure.caption.37}% -\contentsline {figure}{\numberline {4.10}{\ignorespaces After setting up the environment, this is the final depiction of an independent variable. The reader $x$ reads the values computed by the procedure stored in memory, a second-order Runge-Kutta method in this case.}}{40}{figure.caption.38}% -\contentsline {figure}{\numberline {4.11}{\ignorespaces The Lorenz's Attractor example has a very famous butterfly shape from certain angles and constant values in the graph generated by the solution of the differential equations..}}{41}{figure.caption.39}% -\addvspace {10\p@ } -\contentsline {figure}{\numberline {5.1}{\ignorespaces During simulation, functions change the time domain to the one that better fits certain entities, such as the \texttt {Solver} and the driver. 
The image is heavily inspired by a figure in~\cite {Edil2017}.}}{42}{figure.caption.40}% -\contentsline {figure}{\numberline {5.2}{\ignorespaces Updated auxiliary types for the \texttt {Parameters} type.}}{44}{figure.caption.41}% -\contentsline {figure}{\numberline {5.3}{\ignorespaces Linear interpolation is being used to transition us back to the continuous domain..}}{47}{figure.caption.42}% -\contentsline {figure}{\numberline {5.4}{\ignorespaces The new \textit {updateInteg} function add linear interpolation to the pipeline when receiving a parametric record.}}{48}{figure.caption.43}% -\addvspace {10\p@ } -\contentsline {figure}{\numberline {6.1}{\ignorespaces With just a few iterations, the exponential behaviour of the implementation is already noticeable.}}{50}{figure.caption.45}% -\contentsline {figure}{\numberline {6.2}{\ignorespaces The new \textit {createInteg} function relies on interpolation composed with memoization. Also, this combination \textit {produces} results from the computation located in a different memory region, the one pointed by the \texttt {computation} pointer in the integrator.}}{56}{figure.caption.47}% -\contentsline {figure}{\numberline {6.3}{\ignorespaces The function \textit {reads} information from the caching pointer, rather than the pointer where the solvers compute the results.}}{57}{figure.caption.48}% -\contentsline {figure}{\numberline {6.4}{\ignorespaces The new \textit {updateInteg} function gives to the solver functions access to the region with the cached data.}}{58}{figure.caption.49}% -\contentsline {figure}{\numberline {6.5}{\ignorespaces Caching changes the direction of walking through the iteration axis. 
It also removes an entire pass through the previous iterations.}}{59}{figure.caption.50}% -\contentsline {figure}{\numberline {6.6}{\ignorespaces By using a logarithmic scale, we can see that the final implementation is performant with more than 100 million iterations in the simulation.}}{63}{figure.caption.53}% -\addvspace {10\p@ } -\contentsline {figure}{\numberline {7.1}{\ignorespaces Resettable counter in hardware, inspired by Levent's works~\cite {levent2000, levent2002}.}}{68}{figure.caption.54}% -\contentsline {figure}{\numberline {7.2}{\ignorespaces Diagram of \texttt {createInteg} primitive for intuition..}}{70}{figure.caption.55}% -\contentsline {figure}{\numberline {7.3}{\ignorespaces Results of FFACT are similar to the final version of FACT..}}{73}{figure.caption.56}% +\contentsline {figure}{\numberline {1.1}{\ignorespaces The translation between the world of software and the mathematical description of differential equations are explicit in the final version of \texttt {FACT}.}}{4}{figure.caption.8}% +\contentsline {figure}{\numberline {1.2}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Simulink implementation.}}{6}{figure.caption.9}% +\contentsline {figure}{\numberline {1.3}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Matlab implementation.}}{7}{figure.caption.10}% +\contentsline {figure}{\numberline {1.4}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Python implementation.}}{7}{figure.caption.11}% +\contentsline {figure}{\numberline {1.5}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Mathematica implementation.}}{7}{figure.caption.12}% +\contentsline {figure}{\numberline {1.6}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Yampa implementation (also in Haskell).}}{7}{figure.caption.13}% +\addvspace {10\p@ } +\contentsline {figure}{\numberline {2.1}{\ignorespaces The combination of these four basic units 
compose any GPAC circuit (taken from~\cite {Edil2018} with permission).}}{10}{figure.caption.14}% +\contentsline {figure}{\numberline {2.2}{\ignorespaces Polynomial circuits resembles combinational circuits, in which the circuit respond instantly to changes on its inputs (taken from~\cite {Edil2018} with permission).}}{11}{figure.caption.15}% +\contentsline {figure}{\numberline {2.3}{\ignorespaces Types are not just labels; they enhance the manipulated data with new information. Their difference in shape can work as the interface for the data.}}{12}{figure.caption.16}% +\contentsline {figure}{\numberline {2.4}{\ignorespaces Functions' signatures are contracts; they purespecify which shape the input information has as well as which shape the output information will have.}}{12}{figure.caption.16}% +\contentsline {figure}{\numberline {2.5}{\ignorespaces Sum types can be understood in terms of sets, in which the members of the set are available candidates for the outer shell type. Parity and possible values in digital states are examples.}}{13}{figure.caption.17}% +\contentsline {figure}{\numberline {2.6}{\ignorespaces Product types are a combination of different sets, where you pick a representative from each one. Digital clocks' time and objects' coordinates in space are common use cases. 
In Haskell, a product type can be defined using a \textit {record} alongside with the constructor, where the labels for each member inside it are explicit.}}{13}{figure.caption.18}% +\contentsline {figure}{\numberline {2.7}{\ignorespaces Depending on the application, different representations of the same structure need to used due to the domain of interest and/or memory constraints.}}{14}{figure.caption.19}% +\contentsline {figure}{\numberline {2.8}{\ignorespaces The minimum requirement for the \texttt {Ord} typeclass is the $<=$ operator, meaning that the functions $<$, $<=$, $>$, $>=$, \texttt {max} and \texttt {min} are now unlocked for the type \texttt {ClockTime} after the implementation. Typeclasses can be viewed as a third dimension in a type.}}{14}{figure.caption.20}% +\contentsline {figure}{\numberline {2.9}{\ignorespaces Replacements for the validation function within a pipeline like the above is common.}}{15}{figure.caption.21}% +\contentsline {figure}{\numberline {2.10}{\ignorespaces The initial value is used as a starting point for the procedure. The algorithm continues until the time of interest is reached in the unknown function. Due to its large time step, the final answer is really far-off from the expected result.}}{17}{figure.caption.22}% +\contentsline {figure}{\numberline {2.11}{\ignorespaces In Haskell, the \texttt {type} keyword works for alias. The first draft of the \texttt {CT} type is a \textit {function}, in which providing a floating point value as time returns another value as outcome.}}{17}{figure.caption.23}% +\contentsline {figure}{\numberline {2.12}{\ignorespaces The \texttt {Parameters} type represents a given moment in time, carrying over all the necessary information to execute a solver step until the time limit is reached. 
Some useful typeclasses are being derived to these types, given that Haskell is capable of inferring the implementation of typeclasses in simple cases.}}{18}{figure.caption.24}% +\contentsline {figure}{\numberline {2.13}{\ignorespaces The \texttt {CT} type is a function of from time related information to an arbitrary potentially effectful outcome value.}}{19}{figure.caption.25}% +\contentsline {figure}{\numberline {2.14}{\ignorespaces The \texttt {CT} type can leverage monad transformers in Haskell via \texttt {Reader} in combination with \texttt {IO}.}}{19}{figure.caption.26}% +\addvspace {10\p@ } +\contentsline {figure}{\numberline {3.1}{\ignorespaces Given a parametric record \texttt {ps} and a dynamic value \texttt {da}, the \textit {fmap} functor of the \texttt {CT} type applies the former to the latter. Because the final result is wrapped inside the \texttt {IO} shell, a second \textit {fmap} is necessary.}}{21}{figure.caption.27}% +\contentsline {figure}{\numberline {3.2}{\ignorespaces With the \texttt {Applicative} typeclass, it is possible to cope with functions inside the \texttt {CT} type. Again, the \textit {fmap} from \texttt {IO} is being used in the implementation.}}{22}{figure.caption.28}% +\contentsline {figure}{\numberline {3.3}{\ignorespaces The $>>=$ operator used in the implementation is the \textit {bind} from the \texttt {IO} shell. This indicates that when dealing with monads within monads, it is frequent to use the implementation of the internal members.}}{23}{figure.caption.29}% +\contentsline {figure}{\numberline {3.4}{\ignorespaces The typeclass \texttt {MonadIO} transforms a given value wrapped in \texttt {IO} into a different monad. 
In this case, the parameter \texttt {m} of the function is the output of the \texttt {CT} type.}}{23}{figure.caption.30}% +\contentsline {figure}{\numberline {3.5}{\ignorespaces The ability of lifting numerical values to the \texttt {CT} type resembles three FF-GPAC analog circuits: \texttt {Constant}, \texttt {Adder} and \texttt {Multiplier}.}}{24}{figure.caption.31}% +\contentsline {figure}{\numberline {3.6}{\ignorespaces State Machines are a common abstraction in computer science due to its easy mapping between function calls and states. Memory regions and peripherals are embedded with the idea of a state, not only pure functions. Further, side effects can even act as the trigger to move from one state to another, meaning that executing a simple function can do more than return a value. Its internal guts can significantly modify the state machine.}}{25}{figure.caption.32}% +\contentsline {figure}{\numberline {3.7}{\ignorespaces The integrator functions attend the rules of composition of FF-GPAC, whilst the \texttt {CT} and \texttt {Integrator} types match the four basic units.}}{30}{figure.caption.33}% +\addvspace {10\p@ } +\contentsline {figure}{\numberline {4.1}{\ignorespaces The integrator functions are essential to create and interconnect combinational and feedback-dependent circuits.}}{34}{figure.caption.34}% +\contentsline {figure}{\numberline {4.2}{\ignorespaces The developed DSL translates a system described by differential equations to an executable model that resembles FF-GPAC's description.}}{34}{figure.caption.35}% +\contentsline {figure}{\numberline {4.3}{\ignorespaces Because the list implements the \texttt {Traversable} typeclass, it allows this type to use the \textit {traverse} and \textit {sequence} functions, in which both are related to changing the internal behaviour of the nested structures.}}{35}{figure.caption.36}% +\contentsline {figure}{\numberline {4.4}{\ignorespaces A \textit {state vector} comprises multiple state variables and 
requires the use of the \textit {sequence} function to sync time across all variables.}}{35}{figure.caption.37}% +\contentsline {figure}{\numberline {4.5}{\ignorespaces When building a model for simulation, the above pipeline is always used, from both points of view. The operations with meaning, i.e., the ones in the \texttt {Semantics} pipeline, are mapped to executable operations in the \texttt {Operational} pipeline, and vice-versa.}}{36}{figure.caption.38}% +\contentsline {figure}{\numberline {4.6}{\ignorespaces Using only FF-GPAC's basic units and their composition rules, it's possible to model the Lorenz Attractor example.}}{39}{figure.caption.39}% +\contentsline {figure}{\numberline {4.7}{\ignorespaces After \textit {createInteg}, this record is the final image of the integrator. The function \textit {initialize} gives us protecting against wrong records of the type \texttt {Parameters}, assuring it begins from the first iteration, i.e., $t_0$.}}{40}{figure.caption.40}% +\contentsline {figure}{\numberline {4.8}{\ignorespaces After \textit {readInteg}, the final floating point values is obtained by reading from memory a computation and passing to it the received parameters record. The result of this application, $v$, is the returned value.}}{41}{figure.caption.41}% +\contentsline {figure}{\numberline {4.9}{\ignorespaces The \textit {updateInteg} function only does side effects, meaning that only affects memory. The internal variable \texttt {c} is a pointer to the computation \textit {itself}, i.e., the computation being created references this exact procedure.}}{41}{figure.caption.42}% +\contentsline {figure}{\numberline {4.10}{\ignorespaces After setting up the environment, this is the final depiction of an independent variable. 
The reader $x$ reads the values computed by the procedure stored in memory, a second-order Runge-Kutta method in this case.}}{42}{figure.caption.43}% +\contentsline {figure}{\numberline {4.11}{\ignorespaces The Lorenz's Attractor example has a very famous butterfly shape from certain angles and constant values in the graph generated by the solution of the differential equations..}}{43}{figure.caption.44}% +\addvspace {10\p@ } +\contentsline {figure}{\numberline {5.1}{\ignorespaces During simulation, functions change the time domain to the one that better fits certain entities, such as the \texttt {Solver} and the driver. The image is heavily inspired by a figure in~\cite {Edil2017}.}}{44}{figure.caption.45}% +\contentsline {figure}{\numberline {5.2}{\ignorespaces Updated auxiliary types for the \texttt {Parameters} type.}}{46}{figure.caption.46}% +\contentsline {figure}{\numberline {5.3}{\ignorespaces Linear interpolation is being used to transition us back to the continuous domain..}}{49}{figure.caption.47}% +\contentsline {figure}{\numberline {5.4}{\ignorespaces The new \textit {updateInteg} function add linear interpolation to the pipeline when receiving a parametric record.}}{50}{figure.caption.48}% +\addvspace {10\p@ } +\contentsline {figure}{\numberline {6.1}{\ignorespaces With just a few iterations, the exponential behaviour of the implementation is already noticeable.}}{52}{figure.caption.50}% +\contentsline {figure}{\numberline {6.2}{\ignorespaces The new \textit {createInteg} function relies on interpolation composed with memoization. 
Also, this combination \textit {produces} results from the computation located in a different memory region, the one pointed by the \texttt {computation} pointer in the integrator.}}{58}{figure.caption.52}% +\contentsline {figure}{\numberline {6.3}{\ignorespaces The function \textit {reads} information from the caching pointer, rather than the pointer where the solvers compute the results.}}{59}{figure.caption.53}% +\contentsline {figure}{\numberline {6.4}{\ignorespaces The new \textit {updateInteg} function gives to the solver functions access to the region with the cached data.}}{60}{figure.caption.54}% +\contentsline {figure}{\numberline {6.5}{\ignorespaces Caching changes the direction of walking through the iteration axis. It also removes an entire pass through the previous iterations.}}{61}{figure.caption.55}% +\contentsline {figure}{\numberline {6.6}{\ignorespaces By using a logarithmic scale, we can see that the final implementation is performant with more than 100 million iterations in the simulation.}}{65}{figure.caption.58}% +\addvspace {10\p@ } +\contentsline {figure}{\numberline {7.1}{\ignorespaces Resettable counter in hardware, inspired by Levent's works~\cite {levent2000, levent2002}.}}{70}{figure.caption.59}% +\contentsline {figure}{\numberline {7.2}{\ignorespaces Diagram of \texttt {createInteg} primitive for intuition..}}{72}{figure.caption.60}% +\contentsline {figure}{\numberline {7.3}{\ignorespaces Results of FFACT are similar to the final version of FACT..}}{75}{figure.caption.61}% \addvspace {10\p@ } \addvspace {10\p@ } \babel@toc {american}{}\relax diff --git a/doc/MastersThesis/thesis.toc b/doc/MastersThesis/thesis.toc index 722023c..75eaf58 100644 --- a/doc/MastersThesis/thesis.toc +++ b/doc/MastersThesis/thesis.toc @@ -3,47 +3,50 @@ \babel@toc {american}{}\relax \contentsline {chapter}{\numberline {1}Introduction}{1}{chapter.1}% \contentsline {section}{\numberline {1.1}Contribution}{2}{section.1.1}% -\contentsline {section}{\numberline 
{1.2}Outline}{6}{section.1.2}% -\contentsline {chapter}{\numberline {2}Design Philosophy}{7}{chapter.2}% -\contentsline {section}{\numberline {2.1}Shannon's Foundation: GPAC}{7}{section.2.1}% -\contentsline {section}{\numberline {2.2}The Shape of Information}{9}{section.2.2}% -\contentsline {section}{\numberline {2.3}Modeling Reality}{13}{section.2.3}% -\contentsline {section}{\numberline {2.4}Making Mathematics Cyber}{15}{section.2.4}% -\contentsline {chapter}{\numberline {3}Effectful Integrals}{18}{chapter.3}% -\contentsline {section}{\numberline {3.1}Uplifting the CT Type}{18}{section.3.1}% -\contentsline {section}{\numberline {3.2}GPAC Bind I: CT}{21}{section.3.2}% -\contentsline {section}{\numberline {3.3}Exploiting Impurity}{23}{section.3.3}% -\contentsline {section}{\numberline {3.4}GPAC Bind II: Integrator}{26}{section.3.4}% -\contentsline {section}{\numberline {3.5}Using Recursion to solve Math}{28}{section.3.5}% -\contentsline {chapter}{\numberline {4}Execution Walkthrough}{31}{chapter.4}% -\contentsline {section}{\numberline {4.1}From Models to Models}{31}{section.4.1}% -\contentsline {section}{\numberline {4.2}Driving the Model}{34}{section.4.2}% -\contentsline {section}{\numberline {4.3}An attractive example}{35}{section.4.3}% -\contentsline {section}{\numberline {4.4}Lorenz's Butterfly}{41}{section.4.4}% -\contentsline {chapter}{\numberline {5}Travelling across Domains}{42}{chapter.5}% -\contentsline {section}{\numberline {5.1}Time Domains}{42}{section.5.1}% -\contentsline {section}{\numberline {5.2}Tweak I: Interpolation}{44}{section.5.2}% -\contentsline {chapter}{\numberline {6}Caching the Speed Pill}{49}{chapter.6}% -\contentsline {section}{\numberline {6.1}Performance}{49}{section.6.1}% -\contentsline {section}{\numberline {6.2}The Saving Strategy}{51}{section.6.2}% -\contentsline {section}{\numberline {6.3}Tweak II: Memoization}{52}{section.6.3}% -\contentsline {section}{\numberline {6.4}A change in Perspective}{58}{section.6.4}% -\contentsline 
{section}{\numberline {6.5}Tweak III: Model and Driver}{59}{section.6.5}% -\contentsline {section}{\numberline {6.6}Results with Caching}{61}{section.6.6}% -\contentsline {chapter}{\numberline {7}Fixing Recursion}{64}{chapter.7}% -\contentsline {section}{\numberline {7.1}Integrator's Noise}{64}{section.7.1}% -\contentsline {section}{\numberline {7.2}The Fixed-Point Combinator}{66}{section.7.2}% -\contentsline {section}{\numberline {7.3}Value Recursion with Fixed-Points}{67}{section.7.3}% -\contentsline {section}{\numberline {7.4}Tweak IV: Fixing FACT}{70}{section.7.4}% -\contentsline {chapter}{\numberline {8}Conclusion}{74}{chapter.8}% -\contentsline {section}{\numberline {8.1}Future Work}{74}{section.8.1}% -\contentsline {subsection}{\numberline {8.1.1}Formalism}{74}{subsection.8.1.1}% -\contentsline {subsection}{\numberline {8.1.2}Extensions}{75}{subsection.8.1.2}% -\contentsline {subsection}{\numberline {8.1.3}Refactoring}{75}{subsection.8.1.3}% -\contentsline {section}{\numberline {8.2}Final Thoughts}{76}{section.8.2}% -\contentsline {chapter}{\numberline {9}Appendix}{77}{chapter.9}% -\contentsline {section}{\numberline {9.1}Literate Programming}{77}{section.9.1}% -\contentsline {chapter}{References}{79}{section*.57}% +\contentsline {subsection}{\numberline {1.1.1}Executable Simulation}{3}{subsection.1.1.1}% +\contentsline {subsection}{\numberline {1.1.2}Formal Foundation}{4}{subsection.1.1.2}% +\contentsline {subsection}{\numberline {1.1.3}Conciseness}{5}{subsection.1.1.3}% +\contentsline {section}{\numberline {1.2}Outline}{7}{section.1.2}% +\contentsline {chapter}{\numberline {2}Design Philosophy}{9}{chapter.2}% +\contentsline {section}{\numberline {2.1}Shannon's Foundation: GPAC}{9}{section.2.1}% +\contentsline {section}{\numberline {2.2}The Shape of Information}{11}{section.2.2}% +\contentsline {section}{\numberline {2.3}Modeling Reality}{15}{section.2.3}% +\contentsline {section}{\numberline {2.4}Making Mathematics Cyber}{17}{section.2.4}% +\contentsline 
{chapter}{\numberline {3}Effectful Integrals}{20}{chapter.3}% +\contentsline {section}{\numberline {3.1}Uplifting the CT Type}{20}{section.3.1}% +\contentsline {section}{\numberline {3.2}GPAC Bind I: CT}{23}{section.3.2}% +\contentsline {section}{\numberline {3.3}Exploiting Impurity}{25}{section.3.3}% +\contentsline {section}{\numberline {3.4}GPAC Bind II: Integrator}{28}{section.3.4}% +\contentsline {section}{\numberline {3.5}Using Recursion to solve Math}{30}{section.3.5}% +\contentsline {chapter}{\numberline {4}Execution Walkthrough}{33}{chapter.4}% +\contentsline {section}{\numberline {4.1}From Models to Models}{33}{section.4.1}% +\contentsline {section}{\numberline {4.2}Driving the Model}{36}{section.4.2}% +\contentsline {section}{\numberline {4.3}An attractive example}{37}{section.4.3}% +\contentsline {section}{\numberline {4.4}Lorenz's Butterfly}{43}{section.4.4}% +\contentsline {chapter}{\numberline {5}Travelling across Domains}{44}{chapter.5}% +\contentsline {section}{\numberline {5.1}Time Domains}{44}{section.5.1}% +\contentsline {section}{\numberline {5.2}Tweak I: Interpolation}{46}{section.5.2}% +\contentsline {chapter}{\numberline {6}Caching the Speed Pill}{51}{chapter.6}% +\contentsline {section}{\numberline {6.1}Performance}{51}{section.6.1}% +\contentsline {section}{\numberline {6.2}The Saving Strategy}{53}{section.6.2}% +\contentsline {section}{\numberline {6.3}Tweak II: Memoization}{54}{section.6.3}% +\contentsline {section}{\numberline {6.4}A change in Perspective}{60}{section.6.4}% +\contentsline {section}{\numberline {6.5}Tweak III: Model and Driver}{61}{section.6.5}% +\contentsline {section}{\numberline {6.6}Results with Caching}{63}{section.6.6}% +\contentsline {chapter}{\numberline {7}Fixing Recursion}{66}{chapter.7}% +\contentsline {section}{\numberline {7.1}Integrator's Noise}{66}{section.7.1}% +\contentsline {section}{\numberline {7.2}The Fixed-Point Combinator}{67}{section.7.2}% +\contentsline {section}{\numberline {7.3}Value Recursion 
with Fixed-Points}{69}{section.7.3}% +\contentsline {section}{\numberline {7.4}Tweak IV: Fixing FACT}{72}{section.7.4}% +\contentsline {chapter}{\numberline {8}Conclusion}{76}{chapter.8}% +\contentsline {section}{\numberline {8.1}Final Thoughts}{76}{section.8.1}% +\contentsline {section}{\numberline {8.2}Future Work}{77}{section.8.2}% +\contentsline {subsection}{\numberline {8.2.1}Formalism}{77}{subsection.8.2.1}% +\contentsline {subsection}{\numberline {8.2.2}Extensions}{77}{subsection.8.2.2}% +\contentsline {subsection}{\numberline {8.2.3}Refactoring}{78}{subsection.8.2.3}% +\contentsline {chapter}{\numberline {9}Appendix}{79}{chapter.9}% +\contentsline {section}{\numberline {9.1}Literate Programming}{79}{section.9.1}% +\contentsline {chapter}{References}{81}{section*.62}% \babel@toc {american}{}\relax \babel@toc {american}{}\relax \babel@toc {american}{}\relax diff --git a/src/Examples/Lorenz.hs b/src/Examples/Lorenz.hs index 3fe3455..225e026 100644 --- a/src/Examples/Lorenz.hs +++ b/src/Examples/Lorenz.hs @@ -47,7 +47,7 @@ lorenzSolver100M = Solver { dt = 0.000001, method = RungeKutta2, stage = SolverStage 0 } - + lorenzSolver1B = Solver { dt = 0.0000001, method = RungeKutta2, @@ -76,6 +76,13 @@ lorenzModel = mdo beta = 8.0 / 3.0 return $ sequence [x, y, z] +lorenzSolverYampa = Solver { dt = 0.01, + method = Euler, + stage = SolverStage 0 + } + +lorenzYampa = runCTFinal lorenzModel 1000 lorenzSolverYampa + lorenz100 = runCTFinal lorenzModel 100 lorenzSolver100 lorenz1k = runCTFinal lorenzModel 100 lorenzSolver1k From 09ec77fd87bc85d3ba80585d13441129bfa0589f Mon Sep 17 00:00:00 2001 From: EduardoLR10 Date: Mon, 24 Mar 2025 01:41:31 -0300 Subject: [PATCH 06/10] Add simulink reference --- doc/MastersThesis/Lhs/Conclusion.lhs | 2 +- doc/MastersThesis/Lhs/Introduction.lhs | 2 +- doc/MastersThesis/thesis.lof | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/doc/MastersThesis/Lhs/Conclusion.lhs b/doc/MastersThesis/Lhs/Conclusion.lhs index 
6eb4189..ddb2c1a 100644 --- a/doc/MastersThesis/Lhs/Conclusion.lhs +++ b/doc/MastersThesis/Lhs/Conclusion.lhs @@ -26,7 +26,7 @@ use of the chosen typeclasses. \subsection{Extensions} -As explained in chapters 1 and 2, there are some extensions that increase the capabilities of Shannon's original GPAC model. One of these extensions, FF-GPAC, was the one chosen to be modeled via software. However, there are other extensions that not only expand the types of functions that can be modeled, e.g., hypertranscendental functions, but also explore new properties, such as Turing universitality~\cite{Graca2004, Graca2016}. The proposed software didn't touch on those enhancements and restricted the set of functions to only algebraic functions. More recent extensions of GPAC should also be explored to simulate an even broader set of functions present in the continuous time domain. +As explained in Chapters 1 and 2, there are some extensions that increase the capabilities of Shannon's original GPAC model. One of these extensions, FF-GPAC, was the one chosen to be modeled via software. However, there are other extensions that not only expand the types of functions that can be modeled, e.g., hypertranscendental functions, but also explore new properties, such as Turing universitality~\cite{Graca2004, Graca2016}. The proposed software didn't touch on those enhancements and restricted the set of functions to only algebraic functions. More recent extensions of GPAC should also be explored to simulate an even broader set of functions present in the continuous time domain. In regards to numerical methods, one of the immediate improvements would be to use \textit{adaptive} size for the solver time step that \textit{change dynamically} in run time. This strategy controls the errors accumulated when using the derivative by adapting the size of the time step. 
Hence, it starts backtracking previous steps with smaller time steps until some error threshold is satisfied, thus providing finer and granular control to the numerical methods, coping with approximation errors due to larger time steps. diff --git a/doc/MastersThesis/Lhs/Introduction.lhs b/doc/MastersThesis/Lhs/Introduction.lhs index 6ee70eb..9644185 100644 --- a/doc/MastersThesis/Lhs/Introduction.lhs +++ b/doc/MastersThesis/Lhs/Introduction.lhs @@ -120,7 +120,7 @@ are being omitted. \centering \includegraphics[width=0.95\linewidth]{MastersThesis/img/lorenzSimulink} \end{minipage} -\caption{Comparison of the Lorenz Attractor Model between FFACT and a Simulink implementation.} +\caption{Comparison of the Lorenz Attractor Model between FFACT and a Simulink implementation~\cite{Simulink}.} \label{fig:lorenz-simulink} \end{figure} diff --git a/doc/MastersThesis/thesis.lof b/doc/MastersThesis/thesis.lof index e0147e7..e593505 100644 --- a/doc/MastersThesis/thesis.lof +++ b/doc/MastersThesis/thesis.lof @@ -3,7 +3,7 @@ \babel@toc {american}{}\relax \addvspace {10\p@ } \contentsline {figure}{\numberline {1.1}{\ignorespaces The translation between the world of software and the mathematical description of differential equations are explicit in the final version of \texttt {FACT}.}}{4}{figure.caption.8}% -\contentsline {figure}{\numberline {1.2}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Simulink implementation.}}{6}{figure.caption.9}% +\contentsline {figure}{\numberline {1.2}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Simulink implementation~\cite {Simulink}.}}{6}{figure.caption.9}% \contentsline {figure}{\numberline {1.3}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Matlab implementation.}}{7}{figure.caption.10}% \contentsline {figure}{\numberline {1.4}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Python implementation.}}{7}{figure.caption.11}% 
\contentsline {figure}{\numberline {1.5}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Mathematica implementation.}}{7}{figure.caption.12}% From d013219f7949c49eb6b816e4f3067b196c3007c1 Mon Sep 17 00:00:00 2001 From: EduardoLR10 Date: Mon, 24 Mar 2025 01:46:38 -0300 Subject: [PATCH 07/10] Add yampa reference --- doc/MastersThesis/Lhs/Introduction.lhs | 2 +- doc/MastersThesis/bibliography.bib | 17 +++++++++++++++++ doc/MastersThesis/thesis.lof | 2 +- 3 files changed, 19 insertions(+), 2 deletions(-) diff --git a/doc/MastersThesis/Lhs/Introduction.lhs b/doc/MastersThesis/Lhs/Introduction.lhs index 9644185..0bcdb0a 100644 --- a/doc/MastersThesis/Lhs/Introduction.lhs +++ b/doc/MastersThesis/Lhs/Introduction.lhs @@ -239,7 +239,7 @@ are being omitted. returnA -< (x, y, z) \end{purespec} \end{minipage} -\caption{Comparison of the Lorenz Attractor Model between FFACT and a Yampa implementation (also in Haskell).} +\caption{Comparison of the Lorenz Attractor Model between FFACT and a Yampa implementation~\cite{Yampa} (also in Haskell).} \label{fig:lorenz-yampa} \end{figure} diff --git a/doc/MastersThesis/bibliography.bib b/doc/MastersThesis/bibliography.bib index f416efe..148ed59 100644 --- a/doc/MastersThesis/bibliography.bib +++ b/doc/MastersThesis/bibliography.bib @@ -448,4 +448,21 @@ @article{Simulink volume = {4}, journal = {IOSR Journal of Engineering}, doi = {10.9790/3021-04252933} +} + +@inproceedings{Yampa, +author = {Perez, Ivan}, +title = {The Beauty and Elegance of Functional Reactive Animation}, +year = {2023}, +isbn = {9798400702952}, +publisher = {Association for Computing Machinery}, +address = {New York, NY, USA}, +url = {https://doi.org/10.1145/3609023.3609806}, +doi = {10.1145/3609023.3609806}, +booktitle = {Proceedings of the 11th ACM SIGPLAN International Workshop on Functional Art, Music, Modelling, and Design}, +pages = {8–20}, +numpages = {13}, +keywords = {Functional Reactive Programming, animation, dataflow, 
domain-specific languages}, +location = {Seattle, WA, USA}, +series = {FARM 2023} } \ No newline at end of file diff --git a/doc/MastersThesis/thesis.lof b/doc/MastersThesis/thesis.lof index e593505..8d5bd3e 100644 --- a/doc/MastersThesis/thesis.lof +++ b/doc/MastersThesis/thesis.lof @@ -7,7 +7,7 @@ \contentsline {figure}{\numberline {1.3}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Matlab implementation.}}{7}{figure.caption.10}% \contentsline {figure}{\numberline {1.4}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Python implementation.}}{7}{figure.caption.11}% \contentsline {figure}{\numberline {1.5}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Mathematica implementation.}}{7}{figure.caption.12}% -\contentsline {figure}{\numberline {1.6}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Yampa implementation (also in Haskell).}}{7}{figure.caption.13}% +\contentsline {figure}{\numberline {1.6}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Yampa implementation~\cite {Yampa} (also in Haskell).}}{7}{figure.caption.13}% \addvspace {10\p@ } \contentsline {figure}{\numberline {2.1}{\ignorespaces The combination of these four basic units compose any GPAC circuit (taken from~\cite {Edil2018} with permission).}}{10}{figure.caption.14}% \contentsline {figure}{\numberline {2.2}{\ignorespaces Polynomial circuits resembles combinational circuits, in which the circuit respond instantly to changes on its inputs (taken from~\cite {Edil2018} with permission).}}{11}{figure.caption.15}% From 9bf7f8fca907aac6f865d17b76921ed926d0ed1b Mon Sep 17 00:00:00 2001 From: EduardoLR10 Date: Sun, 6 Apr 2025 23:40:08 -0300 Subject: [PATCH 08/10] Fix typos in various chapters --- doc/MastersThesis/Lhs/Caching.lhs | 14 ++++++------ doc/MastersThesis/Lhs/Conclusion.lhs | 9 +++++--- doc/MastersThesis/Lhs/Design.lhs | 14 ++++++------ 
doc/MastersThesis/Lhs/Enlightenment.lhs | 16 +++++++------- doc/MastersThesis/Lhs/Fixing.lhs | 27 ++++++++++++++--------- doc/MastersThesis/Lhs/Implementation.lhs | 28 ++++++++++++------------ doc/MastersThesis/Lhs/Interpolation.lhs | 12 +++++----- doc/MastersThesis/Lhs/Introduction.lhs | 10 ++++----- doc/MastersThesis/tex/abstract.tex | 2 +- doc/MastersThesis/tex/dedication.tex | 2 +- doc/MastersThesis/thesis.lof | 15 +++++++------ doc/MastersThesis/thesis.toc | 22 +++++++++---------- 12 files changed, 91 insertions(+), 80 deletions(-) diff --git a/doc/MastersThesis/Lhs/Caching.lhs b/doc/MastersThesis/Lhs/Caching.lhs index 2b287b6..c750e5f 100644 --- a/doc/MastersThesis/Lhs/Caching.lhs +++ b/doc/MastersThesis/Lhs/Caching.lhs @@ -66,7 +66,7 @@ Chapter 5, \textit{Travelling across Domains}, leveraged a major concern with th \section{Performance} -The simulations executed in \texttt{FACT} take too long to run. For instance, to execute the Lorenz's Attractor example using the second-order Runge-Kutta method with an unrealistic time step size for real simulations (time step of $1$ second), the simulator can take around \textit{10 seconds} to compute 0 to 5 seconds of the physical system with a testbench using a \texttt{Ryzen 7 5700X} AMD processor and 128GB of RAM. Increasing this interval shows an exponential growth in execution time, as depicted by Table \ref{tab:execTimes} and by Figure \ref{fig:graph1} (values obtained after the interpolation tweak). Although the memory use is also problematic, it is hard to reason about those numbers due to Haskell's \textit{garbage collector}~\footnote{Garbage Collector \href{https://wiki.haskell.org/GHC/Memory\_Management}{\textcolor{blue}{wiki page}}.}, a memory manager that deals with Haskell's \textit{immutability}. 
Thus, the memory values serve just to solidify the notion that \texttt{FACT} is inneficient, showing an exponentinal growth in resource use, which makes it impractical to execute longer simulations and diminishes the usability of the proposed software. +The simulations executed in \texttt{FACT} take too long to run. For instance, to execute the Lorenz's Attractor example using the second-order Runge-Kutta method with an unrealistic time step size for real simulations (time step of $1$ second), the simulator can take around \textit{10 seconds} to compute 0 to 5 seconds of the physical system with a testbench using a \texttt{Ryzen 7 5700X} AMD processor and 128GB of RAM. Increasing this interval shows an exponential growth in execution time, as depicted by Table \ref{tab:execTimes} and by Figure \ref{fig:graph1} (values obtained after the interpolation tweak). Although the memory use is also problematic, it is hard to reason about those numbers due to Haskell's \textit{garbage collector}~\footnote{Garbage Collector \href{https://wiki.haskell.org/GHC/Memory\_Management}{\textcolor{blue}{wiki page}}.}, a memory manager that deals with Haskell's \textit{immutability}. Thus, the memory values serve just to solidify the notion that \texttt{FACT} is inefficient, showing an exponential growth in resource use, which makes it impractical to execute longer simulations and diminishes the usability of the proposed software. \begin{table}[H] \centering @@ -115,7 +115,7 @@ integEuler diff i y = do From Chapter 3, we know that lines 10 to 13 serve the purpose of creating a new parametric record to execute a new solver step for the \textit{previous} iteration, in order to calculate the current one. From Chapter 4, this code section turned out to be where the implicit recursion came in, because the current iteration needs to calculate the previous one. Effectively, this means that for \textit{all} iterations, \textit{all} previous steps from each one needs to be calculated.
The problem is now clear: unnecessary computations are being made for all iterations, because the same solvers steps are not being saved for future steps, although these values do \textit{not} change. In other words, to calculate step 3 of the solver, steps 1 and 2 are the same to calculate step 4 as well, but these values are being lost during the simulation. -To estimate how this lack of optimization affects performance, we can calculate how many solver steps will be executed to simulate theLorenz's Attractor example used in Chapter 4, \textit{Execution Walkthrough}. The Table \ref{tab:solverSteps} shows the total number of solver steps needed per iteration simulating the Lorenz example with the Euler method. In addition, the amount of steps also increase depending on which solver method is being used, given that in the higher order Runge-Kutta methods, multiple stages count as a new step as well. +To estimate how this lack of optimization affects performance, we can calculate how many solver steps will be executed to simulate the Lorenz's Attractor example used in Chapter 4, \textit{Execution Walkthrough}. Table \ref{tab:solverSteps} shows the total number of solver steps needed per iteration simulating the Lorenz example with the Euler method. In addition, the amount of steps also increases depending on which solver method is being used, given that in the higher order Runge-Kutta methods, multiple stages count as a new step as well.
Thus, the accumulation of the solver steps will be addressed, and the amount of steps will be equal to the amount of iterations times how many stages the solver method uses. -The \textit{memo} function creates this memory region for storing values, as well as providing read access to it. This is the only function in \texttt{FACT} that uses a \textit{constraint}, i.e., it restricts the parametric types to the ones that have implemented the requirement. In our case, this function requires that the internal type \texttt{CT} dependency has implemented the \texttt{UMemo} typeclass. Because this typeclass is too complicated to be in the scope of this project, we will settle with the following explanation: it is required that the parametric values are capable of being contained inside an \textit{mutable} array, which is the case for our \texttt{Double} values. As dependencies, the \textit{memo} function receives the computation, as well as the interpolation function that is assumed to be used, in order to attenuate the domain problem described in the previous Chapter. This means that at the end, the final result will be piped to the interpolation function. +The \textit{memo} function creates this memory region for storing values, as well as providing read access to it. This is the only function in \texttt{FACT} that uses a \textit{constraint}, i.e., it restricts the parametric types to the ones that have implemented the requirement. In our case, this function requires that the internal type \texttt{CT} dependency has implemented the \texttt{UMemo} typeclass. Because this typeclass is too complicated to be in the scope of this project, we will settle with the following explanation: it is required that the parametric values are capable of being contained inside a \textit{mutable} array, which is the case for our \texttt{Double} values. 
As dependencies, the \textit{memo} function receives the computation as well as the interpolation function that is assumed to be used, in order to attenuate the domain problem described in the previous Chapter. This means that at the end, the final result will be piped to the interpolation function. \begin{code} memo :: UMemo e => (CT e -> CT e) -> CT e -> CT (CT e) @@ -182,7 +182,7 @@ memo interpolate m = do The function starts by getting how many iterations will occur in the simulation, as well as how many stages the chosen method uses (lines 5 to 7). This is used to pre-allocate the minimum amount of memory required for the execution (line 8). This mutable array is two-dimensional and can be viewed as a table in which the number of iterations and stages determine the number of rows and columns. Pointers to iterate accross the table are declared as \textit{nref} and \textit{stref} (lines 9 and 10), to read iteration and stage values respectively. The code block from line 11 to line 36 delimit a procedure or computation that will only be used when needed, and it is being called at the end of the \textit{memo} function (line 37). -The next step is to follow the exection of this internal function. From line 13 to line 17, auxiliar "variables", i.e., labels to read information, are created to facilitate manipulation of the solver (\texttt{sl}), interval (\texttt{iv}), current iteration (\texttt{n}), current stage (\texttt{st}) and the final stage used in a solver step (\texttt{stu}). The definition of \textit{loop}, which starts at line 18 and closes at line 33, uses all the previously created labels. 
The conditional block (line 19 to 33) will store in the pre-allocated memory region the computed values and, because they are stored in a \textit{sequential} way, the stop condition of the loop is one of the following: the iteration counter of the loop (\texttt{n'}) surpassed the current iteration \textit{or} the iteration counter matches the current iteration \textit{and} the stage counter (\texttt{st'}) reached the ceiling of stages of used solver method (line 19). When the loop stops, it \textit{reads} from the allocated array the value of interest (line 21), given that it is guaranteed that is already in memory. If this condition is not true, it means that further iterations in the loop need to occur in one of the two axis, iteration or stage. +The next step is to follow the execution of this internal function. From line 13 to line 17, auxiliary "variables", i.e., labels to read information, are created to facilitate manipulation of the solver (\texttt{sl}), interval (\texttt{iv}), current iteration (\texttt{n}), current stage (\texttt{st}) and the final stage used in a solver step (\texttt{stu}). The definition of \textit{loop}, which starts at line 18 and closes at line 33, uses all the previously created labels. The conditional block (line 19 to 33) will store in the pre-allocated memory region the computed values and, because they are stored in a \textit{sequential} way, the stop condition of the loop is one of the following: the iteration counter of the loop (\texttt{n'}) surpassed the current iteration \textit{or} the iteration counter matches the current iteration \textit{and} the stage counter (\texttt{st'}) reached the ceiling of stages of the used solver method (line 19). When the loop stops, it \textit{reads} from the allocated array the value of interest (line 21), given that it is guaranteed that it is already in memory. If this condition is not true, it means that further iterations in the loop need to occur in one of the two axes, iteration or stage.
The first step towards that goal is to save the value of the current iteration and stage into memory. The continuous machine \texttt{m}, received as a dependency in line 3, is used to compute a new result with the current counters for iteration and stage (lines 23 to 26). Then, this new value is written into the array (line 27). The condition in line 28 checks if the current stage already achieved its maximum possible value. In that case, the counters for stage and iteration counters will be reset to the first stage (line 29) of the next iteration (line 30) respectively, and the loop should continue (line 31). Otherwise, we need to advance to the next stage within the same iteration and an updated stage (line 32). The loop should continue with the same iteration counter but with the stage counter incremented (lines 32 and 33). @@ -286,7 +286,7 @@ Figure \ref{fig:memoDirection} depicts this stark difference in approach when us \section{Tweak III: Model and Driver} -The memoization added to \texttt{FACT} needs a second tweak, related to the executable models established in Chapter 4. The code bellow is the same example model used in that Chapter: +The memoization added to \texttt{FACT} needs a second tweak, related to the executable models established in Chapter 4. The following code is the same example model used in that Chapter: \begin{spec} exampleModel :: Model Vector @@ -300,7 +300,7 @@ exampleModel = sequence [x, y] \end{spec} -The caching strategy assumes that the created mutable array will be available for the entire simulation. However, the proposed models will \textit{always} discard the table created by the \textit{createInteg} function due to the garbage collector~\footnote{Garbage Collector \href{https://wiki.haskell.org/GHC/Memory\_Management}{\textcolor{blue}{wiki page}}.}, after the \textit{sequence} function. 
Even worse, the table will be created again each time the model is being called and a parametric record is being provided, which happens when using the driver. Thus, the proposed solution to address this problem is to update the \texttt{Model} alias to a \textit{function} of the model. This can be achieved by \textit{wrapping} the state vector with a the \texttt{CT} type, i.e., wrapping the model using the function \textit{pure} or \textit{return}. In this manner, the computation will be "placed" as a side effect of the \texttt{IO} monad and Haskell's memory management system will not remove the table used for caching, in the first computation. So, the following code is the new type alias, alongside the previous example model using the \textit{return} function: +The caching strategy assumes that the created mutable array will be available for the entire simulation. However, the proposed models will \textit{always} discard the table created by the \textit{createInteg} function due to the garbage collector~\footnote{Garbage Collector \href{https://wiki.haskell.org/GHC/Memory\_Management}{\textcolor{blue}{wiki page}}.}, after the \textit{sequence} function. Even worse, the table will be created again each time the model is being called and a parametric record is being provided, which happens when using the driver. Thus, the proposed solution to address this problem is to update the \texttt{Model} alias to a \textit{function} of the model. This can be achieved by \textit{wrapping} the state vector with the \texttt{CT} type, i.e., wrapping the model using the function \textit{pure} or \textit{return}. In this manner, the computation will be "placed" as a side effect of the \texttt{IO} monad and Haskell's memory management system will not remove the table used for caching in the first computation.
So, the following code is the new type alias, alongside the previous example model using the \textit{return} function: \begin{spec} type Model a = CT (CT a) @@ -316,7 +316,7 @@ exampleModel = return $ sequence [x, y] \end{spec} -Due to the new type signature, this change implies changing the driver, i.e., modify the function \textit{runCT} (the changes are analogus to the \textit{runCTFinal} function variant). Further, a new auxiliary function was created, \textit{subRunCT}, to separate the environment into two functions. The \textit{runCT} will execute the mapping with the function \textit{parameterise} and the auxiliary function will address the need for interpolation. +Due to the new type signature, this change implies changing the driver, i.e., modifying the function \textit{runCT} (the changes are analogous to the \textit{runCTFinal} function variant). Further, a new auxiliary function was created, \textit{subRunCT}, to separate the environment into two functions. The \textit{runCT} will execute the mapping with the function \textit{parameterize} and the auxiliary function will address the need for interpolation. \begin{code} runCT :: Model a -> Double -> Solver -> IO [a] diff --git a/doc/MastersThesis/Lhs/Conclusion.lhs b/doc/MastersThesis/Lhs/Conclusion.lhs index ddb2c1a..9e70d10 100644 --- a/doc/MastersThesis/Lhs/Conclusion.lhs +++ b/doc/MastersThesis/Lhs/Conclusion.lhs @@ -1,5 +1,6 @@ -Chapter 2 established the foundation of the implementation, introducing FP concepts and the necessary types -to model continuous time simulation --- with \texttt{CT} being the main type. Chapter 3 extended its power via +Chapter 2 established the foundation of the implementation, introducing +functional programming (FP) concepts and the necessary types +to model continuous time simulation --- with continuous time machines (\texttt{CT}) being the main type.
Chapter 3 extended its power via the implementation of typeclasses to add functionality for the \texttt{CT} type, such as binary operations and numerical representation. Further, it also introduced the \texttt{Integrator}, a CRUD-like interface for it, as well as the available numerical methods for simulation.
@@ -17,6 +18,8 @@ The \texttt{FACT} EDSL~\footnote{\texttt{FACT} \href{https://github.com/FP-Model
\section{Future Work}
+The following subsections describe the three main areas for future improvements in \texttt{FFACT}: formalism, possible extensions, and code refactoring.
+
\subsection{Formalism}
One of the main concerns is the \textit{correctness} of \texttt{FACT} between its specification and its final implementation, i.e., refinement. Shannon's GPAC concept acted as the specification of the project, whilst the proposed software attempted to implement it. The criteria used to verify that the software fulfilled its goal were using it for simulation and inspecting its code, both of which are based on human analysis. This connection, however, was \textit{not} formally verified --- no model checking tools were used for its validation. In order to know that the mathematical description of the problem is being correctly mapped onto a model representation, some formal work needs to be done. This was not explored, and it was considered out of scope for this work.
@@ -26,7 +29,7 @@ use of the chosen typeclasses.
\subsection{Extensions}
-As explained in Chapters 1 and 2, there are some extensions that increase the capabilities of Shannon's original GPAC model. One of these extensions, FF-GPAC, was the one chosen to be modeled via software. However, there are other extensions that not only expand the types of functions that can be modeled, e.g., hypertranscendental functions, but also explore new properties, such as Turing universitality~\cite{Graca2004, Graca2016}.
The proposed software didn't touch on those enhancements and restricted the set of functions to only algebraic functions. More recent extensions of GPAC should also be explored to simulate an even broader set of functions present in the continuous time domain.
+As explained in Chapters 1 and 2, there are some extensions that increase the capabilities of Shannon's original GPAC model. One of these extensions, FF-GPAC, was the one chosen to be modeled via software. However, there are other extensions that not only expand the types of functions that can be modeled, e.g., hypertranscendental functions, but also explore new properties, such as Turing universality~\cite{Graca2004, Graca2016}. The proposed software didn't touch on those enhancements and restricted the set of functions to only algebraic functions. More recent extensions of GPAC should also be explored to simulate an even broader set of functions present in the continuous time domain.
In regard to numerical methods, one of the immediate improvements would be to use an \textit{adaptive} size for the solver time step that \textit{changes dynamically} at run time. This strategy controls the errors accumulated when using the derivative by adapting the size of the time step. Hence, it starts backtracking previous steps with smaller time steps until some error threshold is satisfied, thus providing finer and more granular control to the numerical methods, coping with approximation errors due to larger time steps.
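The backtracking scheme described above can be sketched in plain Haskell. This is a hypothetical illustration, not \texttt{FACT}/\texttt{FFACT} code: the name \textit{adaptiveEuler} and the step-doubling error estimate (comparing one full Euler step against two half steps) are our assumptions.

```haskell
-- Hypothetical sketch of an adaptive time step: compare one Euler step of
-- size h against two steps of size h/2; if the estimated local error exceeds
-- the threshold, backtrack and retry with a smaller step.
adaptiveEuler
  :: Double                        -- error threshold (epsilon)
  -> (Double -> Double -> Double)  -- derivative f t y
  -> Double -> Double              -- current time and state
  -> Double                        -- current step size h
  -> Double                        -- stop time
  -> Double
adaptiveEuler eps f t y h tEnd
  | t >= tEnd              = y
  | err > eps && hs > 1e-8 = adaptiveEuler eps f t y (hs / 2) tEnd   -- backtrack
  | otherwise              = adaptiveEuler eps f (t + hs) y2 (hs * 2) tEnd
  where
    hs   = min h (tEnd - t)              -- never step past the stop time
    full = y + hs * f t y                -- one full Euler step
    half = y + (hs / 2) * f t y          -- first of two half steps
    y2   = half + (hs / 2) * f (t + hs / 2) half
    err  = abs (y2 - full)               -- local error estimate

main :: IO ()
main = print (adaptiveEuler 1e-4 (\_ y -> negate y) 0 1 0.1 1)
-- approximates y(1) for y' = -y, y(0) = 1, i.e., roughly exp (-1)
```

Accepted steps double the step size again, so the solver spends small steps only where the error estimate demands them.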
diff --git a/doc/MastersThesis/Lhs/Design.lhs b/doc/MastersThesis/Lhs/Design.lhs
index 70e609b..58ec516 100644
--- a/doc/MastersThesis/Lhs/Design.lhs
+++ b/doc/MastersThesis/Lhs/Design.lhs
@@ -24,7 +24,7 @@ In order to add a formal basis to the machine, Shannon built the GPAC model, a m
\item Integrator: Given two inputs --- $u(t)$ and $v(t)$ --- and an initial condition $w_0$ at time $t_0$, the unit generates the output $w(t) = w_0 + \int_{t_0}^{t} u(t) \,dv(t)$, where $u$ is the \textit{integrand} and $v$ is the \textit{variable of integration}.
\end{itemize}
-Composition rules that restrict how these units can be hooked to one another. Shannon established that a valid GPAC is the one in which two inputs and two outputs are not interconnected and the inputs are only driven by either the independent variable $t$ (usually \textit{time}) or by a single unit output~\cite{Edil2018, Graca2003, Shannon}. Daniel's GPAC extension, FF-GPAC~\cite{Graca2003}, added new constraints related to no-feedback GPAC configurations while still using the same four basic units. These structures, so-called \textit{polynomial circuits}~\cite{Edil2018, Graca2004}, are being displayed in Figure \ref{fig:gpacComposition} and they are made by only using constant function units, adders and multipliers. Also, such circuits are \textit{combinational}, meaning that they compute values in a \textit{point-wise} manner between the given inputs. Thus, FF-GPAC's composition rules are the following:
+Composition rules restrict how these units can be connected to one another. Shannon established that a valid GPAC is one in which two inputs and two outputs are not interconnected and the inputs are only driven by either the independent variable $t$ (usually \textit{time}) or by a single unit output~\cite{Edil2018, Graca2003, Shannon}. Daniel's GPAC extension, FF-GPAC~\cite{Graca2003}, added new constraints related to no-feedback GPAC configurations while still using the same four basic units.
These structures, so-called \textit{polynomial circuits}~\cite{Edil2018, Graca2004}, are displayed in Figure \ref{fig:gpacComposition} and are built using only constant function units, adders and multipliers. Also, such circuits are \textit{combinational}, meaning that they compute values in a \textit{point-wise} manner between the given inputs. Thus, FF-GPAC's composition rules are the following:
\figuraBib{GPACComposition}{Polynomial circuits resemble combinational circuits, in which the circuit responds instantly to changes on its inputs (taken from~\cite{Edil2018} with permission)}{}{fig:gpacComposition}{width=.55\textwidth}%
@@ -35,7 +35,7 @@ Composition rules that restrict how these units can be hooked to one another. Sh
\item Each variable of integration of an integrator is the input \textit{t}.
\end{itemize}
-During the definition of the DSL, parallels will map the aforementioned basic units and composition rules to the implementation. With this strategy, all the mathematical formalism leveraged for analog computers will drive the implementation in the digital computer. Although we do not formally prove a refinement between the GPAC theory, i.e., our epurespecification, and the final implementation of \texttt{FACT}, is an attempt to build a tool with formalism taken into account; one of the most frequent critiques in the CPS domain, as explained in the previous Chapter.
+During the definition of the DSL, parallels will map the aforementioned basic units and composition rules to the implementation. With this strategy, all the mathematical formalism leveraged for analog computers will drive the implementation in the digital computer. Although we do not formally prove a refinement between the GPAC theory, i.e., our specification, and the final implementation of \texttt{FACT}, this work is an attempt to build a tool with formalism taken into account, addressing one of the most frequent critiques in the CPS domain, as explained in the previous Chapter.
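The point-wise nature of these combinational units can be made concrete with a small sketch. This is our own illustration, not \texttt{FACT} code: the names \texttt{Signal}, \texttt{constant}, \texttt{adder}, and \texttt{multiplier} are assumptions chosen to mirror the FF-GPAC vocabulary.

```haskell
-- Hypothetical sketch: FF-GPAC's combinational units as point-wise
-- functions of time.
type Signal = Double -> Double  -- the value of a signal at each time t

constant :: Double -> Signal
constant k _ = k                -- a constant function unit

adder, multiplier :: Signal -> Signal -> Signal
adder u v t      = u t + v t    -- point-wise sum
multiplier u v t = u t * v t    -- point-wise product

-- A polynomial circuit computing p(t) = 3 t^2 + 1, built only from the
-- units above and the independent variable t.
poly :: Signal
poly = adder (multiplier (constant 3) (multiplier time time)) (constant 1)
  where time = id

main :: IO ()
main = print (poly 2)  -- 3 * 2^2 + 1 = 13.0
```

Since every unit is a pure function of the current time, composing them can only ever produce polynomials of the inputs, which is exactly why the integrator unit is needed for anything beyond them.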
\section{The Shape of Information}
\label{sec:types}
@@ -127,7 +127,7 @@ Within algebraic data types, it is possible to abstract the \textit{structure} o
\label{fig:parametricPoly}
\end{figure}
-In some situations, changing the type of the structure is not the desired property of interest. There are applications where some sort of \textit{behaviour} is a necessity, e.g., the ability of comparing two instances of a custom type. This nature of polymorphism is known as \textit{ad hoc polymorphism}, which is implemented in Haskell via what is similar to java-like interfaces, so-called \textit{typeclasses}~\cite{Wadler1989}. However, establishing a contract with a typeclass differs from an interface in a fundamental apurespect: rather than inheritance being given to the type, it has a lawful implementation, meaning that \textit{mathematical formalism} is assured for it, although the implementer is not obligated to prove its laws on a language level. As an example, the implementation of the typeclass \texttt{Eq} gives to the type all comparable operations ($==$ and $!=$). Figure \ref{fig:adHocPoly} shows the implementation of \texttt{Ord} typeclass for the presented \texttt{ClockTime}, giving it capabilities for sorting instances of such type.
+In some situations, changing the type of the structure is not the desired property of interest. There are applications where some sort of \textit{behaviour} is a necessity, e.g., the ability to compare two instances of a custom type. This nature of polymorphism is known as \textit{ad hoc polymorphism}, which is implemented in Haskell via what is similar to Java-like interfaces, so-called \textit{typeclasses}~\cite{Wadler1989}.
However, establishing a contract with a typeclass differs from an interface in a fundamental aspect: rather than inheritance being given to the type, it has a lawful implementation, meaning that \textit{mathematical formalism} is assured for it, although the implementer is not obligated to prove its laws on a language level. As an example, the implementation of the typeclass \texttt{Eq} gives the type all comparison operations ($==$ and $/=$). Figure \ref{fig:adHocPoly} shows the implementation of the \texttt{Ord} typeclass for the presented \texttt{ClockTime}, giving it capabilities for sorting instances of such type.
\begin{figure}[ht!]
\centering
@@ -152,23 +152,23 @@ In some situations, changing the type of the structure is not the desired proper
-\figuraBib{Pipeline}{Replacements for the validation function within a pipeline like the above is common}{}{fig:pipeline}{width=.75\textwidth}% +\figuraBib{Pipeline}{Replacements for the validation function within a pipeline like the above are common}{}{fig:pipeline}{width=.75\textwidth}% \section{Modeling Reality} \label{sec:diff} -The continuous time problem explained in the introduction was initially addressed by mathematics, which represents physical quantities by \textit{differential equations}. This set of equations establishes a relationship between functions and their repurespective derivatives; the function express the variable of interest and its derivative describe how it changes over time. It is common in the engineering and in the physics domain to know the rate of change of a given variable, but the function itself is still unknown. These variables describe the state of the system, e.g, velocity, water flow, electrical current, etc. When those variables are allowed to vary continuously --- in arbitrarily small increments --- differential equations arise as the standard tool to describe them. +The continuous time problem explained in the introduction was initially addressed by mathematics, which represents physical quantities by \textit{differential equations}. This set of equations establishes a relationship between functions and their respective derivatives; the function express the variable of interest and its derivative describe how it changes over time. It is common in the engineering and in the physics domain to know the rate of change of a given variable, but the function itself is still unknown. These variables describe the state of the system, e.g, velocity, water flow, electrical current, etc. When those variables are allowed to vary continuously --- in arbitrarily small increments --- differential equations arise as the standard tool to describe them. 
While some differential equations have more than one independent variable per function, being classified as a \textit{partial differential equation}, some phenomena can be modeled with only one independent variable per function in a given set, being described as a set of \textit{ordinary differential equations}. However, because the majority of such equations do not have an analytical solution, i.e., cannot be described as a combination of other analytical formulas, numerical procedures are used to solve the system. These mechanisms \textit{quantize} the physical time duration into an interval of numbers, each spaced by a \textit{time step} from the other, and the sequence starts from an \textit{initial value}. Afterward, the derivative is used to calculate the slope or the direction in which the tangent of the function is moving in time in order to predict the value of the next step, i.e., determine which point better represents the function in the next time step. The order of the method varies its precision during the prediction of the steps, e.g., the Runge-Kutta method of 4th order is more precise than the Euler method or the Runge-Kutta of 2nd order.
-These numerical methods are used to solve problems purespecified by the following mathematical relations:
+These numerical methods are used to solve problems specified by the following mathematical relations:
\begin{equation}
\dot{y}(t) = f(t, y(t)) \quad y(t_0) = y_0
\label{eq:diffEq}
\end{equation}
-As showed, both the derivative and the function --- the mathematical formulation of the system --- varies according to \textit{time}. Both acts as functions in which for a given time value, it produces a numerical outcome. Moreover, this equality assumes that the next step following the derivative's direction will not be that different from the actual value of the function $y$ if the time step is small enough.
Further, it is assumed that in case of a small enough time step, the difference between time samples is $h$, i.e., the time step. In order to model this mathematical relationship between the functions and its repurespective derivative, these methods use iteration-based approximations. For instance, the following equation represents one step of the first-order Euler method, the simplest numerical method:
+As shown, both the derivative and the function --- the mathematical formulation of the system --- vary according to \textit{time}. Both act as functions that, for a given time value, produce a numerical outcome. Moreover, this equality assumes that the next step following the derivative's direction will not be that different from the actual value of the function $y$ if the time step is small enough. Further, it is assumed that in case of a small enough time step, the difference between time samples is $h$, i.e., the time step. In order to model this mathematical relationship between the functions and their respective derivatives, these methods use iteration-based approximations. For instance, the following equation represents one step of the first-order Euler method, the simplest numerical method:
\begin{equation}
y_{n + 1} = y_n + hf(t_n, y_n)
\end{equation}
diff --git a/doc/MastersThesis/Lhs/Enlightenment.lhs b/doc/MastersThesis/Lhs/Enlightenment.lhs
index 8892444..5993fac 100644
--- a/doc/MastersThesis/Lhs/Enlightenment.lhs
+++ b/doc/MastersThesis/Lhs/Enlightenment.lhs
@@ -34,7 +34,7 @@ oldLorenzSystem = runCTFinal oldLorenzModel 100 lorenzSolver
\end{code}
}
-Previously, we presented in detail the latter core type of the implementation, the \texttt{Integrator}, as well as why it can model an integral when used with the \texttt{CT} type.
This Chapter is a follow-up, and its objectives are threefold: describe how to map a set of differential equations to an executable model, reveal which functions execute a given example and present a guided-example as a proof-of-concept.
+Previously, we presented in detail the latter core type of the implementation, the integrator, as well as why it can model an integral when used with the \texttt{CT} type. This Chapter is a follow-up, and its objectives are threefold: to describe how to map a set of differential equations to an executable model, to reveal which functions execute a given example and to present a guided example as a proof-of-concept.
\section{From Models to Models}
@@ -117,7 +117,7 @@ Finally, when creating a model, the same steps have to be done in the same order
\begin{center}
\includegraphics[width=0.97\linewidth]{MastersThesis/img/ModelPipeline}
\end{center}
-\caption{When building a model for simulation, the above pipeline is always used, from both points of view. The operations with meaning, i.e., the ones in the \texttt{Semantics} pipeline, are mapped to executable operations in the \texttt{Operational} pipeline, and vice-versa.}
+\caption[Execution pipeline of a model.]{When building a model for simulation, the above pipeline is always used, from both points of view. The operations with meaning, i.e., the ones in the \texttt{Semantics} pipeline, are mapped to executable operations in the \texttt{Operational} pipeline, and vice-versa.}
\label{fig:modelPipe}
\end{figure}
@@ -141,15 +141,15 @@ runCT m t sl =
\end{spec}
On line 3, we convert the final \textit{time value} for the simulation into an interval value for the simulation (\texttt{iv}) --- the simulation always starts at 0 and goes all the
-way up to the requested time.
Next up, on line 4, we convert the interval to an \textit{iteration} interval in the format of a tuple, i.e., the continuous interval becomes the tuple $(0, \frac{stopTime - startTime}{timeStep})$, in which the second value of the tuple is \textit{rounded}. From line 5 to line 11, we are defining an auxiliary function \textit{parameterise}. This function picks a natural number, which represents the iteration
+way up to the requested time. Next up, on line 4, we convert the interval to an \textit{iteration} interval in the format of a tuple, i.e., the continuous interval becomes the tuple $(0, \frac{stopTime - startTime}{timeStep})$, in which the second value of the tuple is \textit{rounded}. From line 5 to line 11, we are defining an auxiliary function \textit{parameterize}. This function picks a natural number, which represents the iteration
index, and creates a new record with the type \texttt{Parameters}. Additionally, it uses the auxiliary function \textit{iterToTime} (line 7), which converts the iteration number from the domain of discrete \textit{steps} to the domain of \textit{discrete time}, i.e., the time the solver methods can operate with (Chapter 5 will explore more of this concept). This conversion is based on the time step being used, as well as which method and in which stage it is for that specific iteration. Finally, line 13 produces the outcome of the \textit{runCT} function. The final result is the output from a function called \textit{map} piped as an argument to the \textit{sequence} function.
-The \textit{map} operation is provided by the \texttt{Functor} of the list monad, and it applies an arbitrary function to the internal members of a list in a \textit{sequential} manner. In this case, the \textit{parameterise} function, composed with the continuous machine \texttt{m}, is the one being mapped.
Thus, a custom value of the type \texttt{Parameters} is taking place of each natural natural number in the list, and this is being applied to the received \texttt{CT} value. It produces a list of answers in order, each one wrapped in the \texttt{IO} monad. To abstract out the \texttt{IO}, thus getting \texttt{IO [a]} rather than \texttt{[IO a]}, the \textit{sequence} function finishes the implementation. Additionally, there is an analogous implementation of this function, so-called \textit{runCTFinal}, that return only the final result of the simulation instead of the outputs at the time step samples.
+The \textit{map} operation is provided by the \texttt{Functor} instance of lists, and it applies an arbitrary function to the internal members of a list in a \textit{sequential} manner. In this case, the \textit{parameterize} function, composed with the continuous machine \texttt{m}, is the one being mapped. Thus, a custom value of the type \texttt{Parameters} takes the place of each natural number in the list, and this is being applied to the received \texttt{CT} value. It produces a list of answers in order, each one wrapped in the \texttt{IO} monad. To abstract out the \texttt{IO}, thus getting \texttt{IO [a]} rather than \texttt{[IO a]}, the \textit{sequence} function finishes the implementation. Additionally, there is an analogous implementation of this function, so-called \textit{runCTFinal}, that returns only the final result of the simulation instead of the outputs at the time step samples.
The next section will provide an example of this in a step-by-step manner.
\section{An attractive example}
-For the example walkthrough, the same example introduced in the Chapter \textit{Introduction} will be used in this Section. So, we will be solving a system, composed by a set of chaotic solutions, called \textit{the Lorenz Attractor}.
In these types of systems, the ordinary differential equations are used to model chaotic systems, providing solutions based on parameter values and initial conditions. The original differential equations are presented bellow:
+For the example walkthrough, the same example introduced in the Chapter \textit{Introduction} will be used in this Section. So, we will be solving a simpler system for demonstration purposes, composed of a set of chaotic solutions, called \textit{the Lorenz Attractor}. In these types of systems, the ordinary differential equations are used to model chaotic behaviour, providing solutions based on parameter values and initial conditions. The original differential equations are presented below:
$$ \sigma = 10.0 $$
$$ \rho = 28.0 $$
@@ -195,9 +195,9 @@ lorenzSystem = runCT lorenzModel 100 lorenzSolver
The first record, \texttt{Solver}, sets the environment (lines 1 to 4). It configures the solver with $0.01$ seconds as the time step, whilst executing the second-order Runge-Kutta method from the initial stage (lines 3 to 6). The \textit{lorenzModel}, presented after setting the constants (lines 6 to 8), executes the aforementioned pipeline to create the model: allocate memory (lines 12 to 14), create read-only pointers (lines 15 to 17), change the computation (lines 18 to 20) and dispatch it (line 21). Finally, the function \textit{lorenzSystem} groups everything together calling the \textit{runCT} driver (line 22).
-After this overview, let's follow the execution path used by the compiler. Haskell's compiler works in a lazily manner, meaning that it calls for execution only the necessary parts. So, the first step calling \textit{lorenzSystem} is to call the \textit{runCT} function with a model, final time for the simulation and solver configurations.
Following its path of execution, the \textit{map} function (inside the driver) forces the application of a parametric record generated by the \textit{parameterise} function to the provided model, \textit{lorenzModel} in this case. Thus, it needs to be executed in order to return from the \textit{runCT} function.
+After this overview, let's follow the execution path used by the compiler. Haskell's compiler works in a lazy manner, meaning that it calls for execution only the necessary parts. So, the first step in calling \textit{lorenzSystem} is to call the \textit{runCT} function with a model, the final time for the simulation and the solver configuration. Following its path of execution, the \textit{map} function (inside the driver) forces the application of a parametric record generated by the \textit{parameterize} function to the provided model, \textit{lorenzModel} in this case. Thus, it needs to be executed in order to return from the \textit{runCT} function.
-To understand the model, we need to follow the execution sequence of the output: \texttt{sequence [x, y, z]}, which requires executing all the lines before this line to obtain the all the state variables. For the sake of simplicity, we will follow the execution of the operations related to the $x$ variable, given that the remaining variables have an analogous execution walkthrough. First and foremost, memory is allocated for the integrator to work with (line 12). Figure \ref{fig:allocateExample} depicts this idea, as well as being a reminder of what the \textit{createInteg} and \textit{initialize} functions do, described in the Chapter \textit{Effectful Integrals}. In this image, the integrator \texttt{integX} comprises two fields, \texttt{initial} and \texttt{computation}. The former is a simple value of the type \texttt{CT Double} that, regardless of the parameters record it receives, it returns the initial condition of the system.
The latter is a pointer or address that references a specific \texttt{CT Double} computation in memory: in the case of receiving a parametric record \texttt{ps}, it fixes potential problems with it via the \texttt{initialize} block, and it applies this fixed value in order to get \texttt{i}, i.e., the initial value $1$, the same being saved in the other field of the record, \texttt{initial}.
+To understand the model, we need to follow the execution sequence of the output: \texttt{sequence [x, y, z]}, which requires executing all the lines before this line to obtain all the state variables. For the sake of simplicity, we will follow the execution of the operations related to the $x$ variable, given that the remaining variables have an analogous execution walkthrough. First and foremost, memory is allocated for the integrator to work with (line 12). Figure \ref{fig:allocateExample} depicts this idea, as well as being a reminder of what the \textit{createInteg} and \textit{initialize} functions do, described in the Chapter \textit{Effectful Integrals}. In this image, the integrator \texttt{integX} comprises two fields, \texttt{initial} and \texttt{computation}. The former is a simple value of the type \texttt{CT Double} that, regardless of the parameters record it receives, returns the initial condition of the system. The latter is a pointer or address that references a specific \texttt{CT Double} computation in memory: in the case of receiving a parametric record \texttt{ps}, it fixes potential problems with it via the \texttt{initialize} block, and it applies this fixed value in order to get \texttt{i}, i.e., the initial value $1$, which is also saved in the other field of the record, \texttt{initial}.
\figuraBib{ExampleAllocate}{After \textit{createInteg}, this record is the final image of the integrator.
The function \textit{initialize} protects us against wrong records of the type \texttt{Parameters}, assuring it begins from the first iteration, i.e., $t_0$}{}{fig:allocateExample}{width=.90\textwidth}%
@@ -229,7 +229,7 @@ It is worth mentioning that the dependency \texttt{c} is a call of a \textit{sol
\section{Lorenz's Butterfly}
-After all the explained theory behind the project, it remains to be seen if this can be converted into practical results. With certain constant values, the generated graph of the Lorenz's Attractor example used in the last Chapter is known for oscillation and getting the shape of two fixed point attractors, meaning that the system evolves to an oscillating state even if slightly disturbed. As showed in Figure \ref{fig:lorenzPlots}, the obtained graph from the Lorenz's Attractor model matches what was expected for a Lorenz's system. It is worth noting that changing the values of $\sigma$, $\rho$ and $\beta$ can produce completely different answers, destroying the resembled "butterfly" shape of the graph. Although correct, the presented solution has a few drawbacks. The next three chapters will explain and address the identified problems with the current implementation.
+After all the explained theory behind the project, it remains to be seen if this can be converted into practical results. As depicted in Figure \ref{fig:lorenzPlots}, the obtained graph from the Lorenz's Attractor model matches what was expected for a Lorenz system. It is worth noting that changing the values of $\sigma$, $\rho$ and $\beta$ can produce completely different answers, destroying the characteristic "butterfly" shape of the graph. Although correct, the presented solution has a few drawbacks. The next three chapters will explain and address the identified problems with the current implementation.
\figuraBib{LorenzPlot1}{The Lorenz's Attractor example has a very famous butterfly shape from certain angles and constant values in the graph generated by the solution of the differential equations.}{}{fig:lorenzPlots}{width=.90\textwidth}%
diff --git a/doc/MastersThesis/Lhs/Fixing.lhs b/doc/MastersThesis/Lhs/Fixing.lhs
index 9307b76..f4ac47f 100644
--- a/doc/MastersThesis/Lhs/Fixing.lhs
+++ b/doc/MastersThesis/Lhs/Fixing.lhs
@@ -25,7 +25,7 @@ will present \textit{FFACT}, an evolution of FACT which aims to reduce the noise
Chapter 4, \textit{Execution Walkthrough}, described the semantics and usability on an example of a system in mathematical specification and its mapping to a simulation-ready description provided by FACT.
-Below we have this example modeled using FACT (same code as provided in Section~\ref{sec:intro}):
+We have this example modeled using FACT (same code as provided in Section~\ref{sec:intro}):
%
\vspace{0.1cm}
\begin{spec}
@@ -60,7 +60,15 @@ a specific sequence of steps to complete a model for any simulation:
\item Update integrators with the actual ODEs of interest (via the use of \textit{updateInteg}).
\end{enumerate}
-Visually, this step-by-step list for FACT's models follow the pattern detailed in Figure~\ref{fig:modelPipe} in Chapter 4, \textit{Execution Walkthrough}.
+Visually, this step-by-step list for FACT's models follows the pattern detailed in Figure~\ref{fig:modelPipe} in Chapter 4, \textit{Execution Walkthrough}:
+
+\begin{figure}[H]
+\begin{center}
+\includegraphics[width=0.97\linewidth]{MastersThesis/img/ModelPipeline}
+\end{center}
+\caption[Execution pipeline of a model.]{Pipeline of execution when creating a model in \texttt{FACT}.}
+\end{figure}
+
More importantly, \emph{all} those steps are visible and transparent from a usability point of view. Hence, a system's designer \emph{must} be aware of this \emph{entire} sequence of mandatory steps, even if his interest probably only relates to lines 12 to 14.
Although one's goal is being able to specify a system and start a simulation, there is no escape -- one has to bear the noise created due to
@@ -75,14 +83,14 @@ required piece to get rid of the \texttt{Integrator} type, thus also removing it
\section{The Fixed-Point Combinator}
\label{subsec:fix}
-It is worth noting that the term \textit{fixed-point} has different meanings in the domains of engineering and mathematics. When referecing the
+It is worth noting that the term \textit{fixed-point} has different meanings in the domains of engineering and mathematics. When referencing the
fractional representations within a computer, one may mean the \textit{fixed-point representation}. Thus, to avoid confusion, the following is the definition of such a concept in this dissertation, alongside a set of examples of its use case as a mathematical combinator that can be used to implement recursion.
On the surface, the fixed-point combinator is a simple mapping that fulfills the following property: a point \emph{p} is a fixed-point of a function \emph{f} if \emph{f(p)} lies on the identity function, i.e., \emph{f(p) = p}. Not all functions have fixed-points, and some functions may have more than one~\cite{tennent1991}.
-Further, we seek to establish theorems and algorithms in which one can guarantees fixed-points and their uniqueness, such as the Banach fixed-point theorem~\cite{bryant1985}.
+Further, we seek to establish theorems and algorithms in which one can guarantee fixed-points and their uniqueness, such as the Banach fixed-point theorem~\cite{bryant1985}.
In programming terms, by following specific requirements one could find the fixed-point of a function via an iterative process that involves going back and forth between it and the identity function until the difference in outcomes is less than or equal to an arbitrary~$\epsilon$.
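This iterative process can be sketched in a few lines of Haskell. This is our own hedged illustration, not code from FACT: the name \textit{fixedPoint} and the stopping criterion are assumptions matching the $\epsilon$-based description above.

```haskell
-- Iterate f from a starting guess until |f x - x| <= eps, i.e., until we are
-- (approximately) back on the identity function.
fixedPoint :: Double -> (Double -> Double) -> Double -> Double
fixedPoint eps f x
  | abs (fx - x) <= eps = fx
  | otherwise           = fixedPoint eps f fx
  where fx = f x

main :: IO ()
main = print (fixedPoint 1e-9 cos 1.0)
-- cos is a contraction near its fixed-point, so by the Banach theorem the
-- iteration converges, to roughly 0.7390851 (the p with cos p = p)
```

Functions that are not contractions may cycle or diverge under this scheme, which is exactly why theorems such as Banach's matter before trusting the iteration.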
@@ -158,8 +166,7 @@ Furthermore, this process can be used in conjunction with monadic operations as \end{purespec} \vspace{-0.1cm} % -This combination, however, cannot address \emph{all} cases when using side-effects. -In the above, executing the side-effect in \texttt{countDown} do not contribute to its own \emph{definition}. +This combination, however, cannot address \emph{all} cases when using side-effects. Executing the side-effect in \texttt{countDown} does not contribute to its own \emph{definition}. There is no construct or variable that requires the side-effect to be executed in order to determine its meaning. This ability -- being able to set values based on the result of running side-effects whilst keeping the fixed-point running -- is something of interest because, as we are about to see, this allows the use of \emph{cyclic} definitions. @@ -195,7 +202,7 @@ The former case, however, needs a special kind of recursion, so-called \emph{val As we are about to understand in Section~\ref{sec:ffact}, the use of value recursion to have monadic bindings with the same convenience of \texttt{letrec} will be the key to our improvement on FFACT over FACT. Fundamentally, it will \emph{tie the recursion knot} done in FACT via the complicated implicit recursion mentioned in Section~\ref{sec:integrator}. -In terms of implementation, this is being achieved by the use of the \texttt{mfix} construct~\cite{levent2000}, which is accompained by a \emph{recursive do} syntax sugar~\cite{levent2002}, with the caveat of not being able to do shadowing -- much like the \texttt{let} and \texttt{where} clauses in Haskell. +In terms of implementation, this is being achieved by the use of the \texttt{mfix} construct~\cite{levent2000}, which is accompanied by \emph{recursive do} syntactic sugar~\cite{levent2002}, with the caveat of not being able to do shadowing -- much like the \texttt{let} and \texttt{where} clauses in Haskell.
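A minimal, standalone sketch of value recursion with \texttt{mfix} (an illustration, not FFACT code): the bound result appears in its own definition, something plain \texttt{>>=} cannot express:

```haskell
import Control.Monad.Fix (mfix)

-- The result xs is used in its own definition: mfix feeds the eventual
-- result back into the computation, tying a cyclic (infinite) list in IO.
ones :: IO [Int]
ones = mfix (\xs -> pure (1 : xs))
```

With the \texttt{RecursiveDo} extension the same knot can be written with \texttt{mdo}, where the binding \texttt{xs <- pure (1 : xs)} refers to itself.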
In order for a type to be able to use this construct, it should follow specific algebraic laws~\cite{leventThesis} to then implement the \texttt{MonadFix} type class found in \texttt{Control.Monad.Fix}~\footnote{\texttt{Control.Monad.Fix} \href{https://hackage.haskell.org/package/base-4.21.0.0/docs/Control-Monad-Fix.html}{\textcolor{blue}{hackage documentation}}.} package: % %% \vspace{-0.8cm} @@ -239,7 +246,7 @@ updateInteg integ diff = do liftIO $ writeIORef (computation integ) z \end{purespec} -\figuraBib{createInteg}{Diagram of \texttt{createInteg} primitive for intuition.}{}{fig:createIntegDiagram}{width=.97\textwidth}% +\figuraBib{createInteg}{Diagram of \texttt{createInteg} primitive for intuition}{}{fig:createIntegDiagram}{width=.97\textwidth}% \section{Tweak IV: Fixing FACT} \label{sec:ffact} @@ -307,7 +314,7 @@ integ diff i = \end{code} \vspace{-0.2cm} % -This new functin received the differential equation of interest, named \texttt{diff}, and the initial condition of the simulation, identified +This new function receives the differential equation of interest, named \texttt{diff}, and the initial condition of the simulation, identified as \texttt{i}, on line 2. Interpolation and memoization requirements from FACT are being maintained, as shown on line 3. Lines 3 to 6 demonstrate the use case for FFACT's \texttt{mdo}. A continuous machine created by the memoization function (line 3), \texttt{y}, uses another continuous machine, \texttt{z}, yet to be defined. This continuous machine, defined on line 4, retrieves the numerical method chosen by a value of type \texttt{Parameters}, via the function \texttt{f}. @@ -329,7 +336,7 @@ lorenzSystem = runCT lorenzModel 100 lorenzSolver Not surprisingly, the results of this new approach using the monadic fixed-point combinator are very similar to the performance metrics depicted in Chapter 6, \textit{Caching the Speed Pill} --- indicating that we are \textit{not} trading performance
Figure~\ref{fig:fixed-graph} shows the new results: +for a gain in conciseness. Figure~\ref{fig:fixed-graph} shows the new results. \figuraBib{Graph3}{Results of FFACT are similar to the final version of FACT.}{}{fig:fixed-graph}{width=.97\textwidth}% diff --git a/doc/MastersThesis/Lhs/Implementation.lhs b/doc/MastersThesis/Lhs/Implementation.lhs index 6a6eb6c..b966633 100644 --- a/doc/MastersThesis/Lhs/Implementation.lhs +++ b/doc/MastersThesis/Lhs/Implementation.lhs @@ -9,7 +9,7 @@ import Control.Monad.Trans.Reader \end{code} } -This Chapter details the next steps to simulate continuous-time behaviours. It starts by enhancing the previously defined \texttt{CT} type by implementing some specific typeclasses. Next, the second core type of the simulation, the \texttt{Integrator} type, will be introduced alongside its functions. These improvements will then be compared to FF-GPAC's basic units, our source of formalism within the project. At the end of the Chapter, an implicit recursion will be blended with a lot of effectful operations, making the \texttt{Integrator} type hard to digest. This will be addressed by a guided Lorenz Attractor example in the next Chapter, \textit{Execution Walkthrough}. +This Chapter details the next steps to simulate continuous-time behaviours using more advanced Haskell concepts, like typeclasses~\footnote{\texttt{Classes in Haskell:} \href{https://www.haskell.org/tutorial/classes.html}{\textcolor{blue}{reference}}.}. It starts by enhancing the previously defined \texttt{CT} type by implementing some specific typeclasses. Next, the second core type of the simulation, the \texttt{Integrator} type, will be introduced alongside its functions. These improvements will then be compared to FF-GPAC's basic units, our source of formalism within the project. At the end of the Chapter, an implicit recursion will be blended with a lot of effectful operations, making the \texttt{Integrator} type hard to digest. 
This will be addressed by a guided Lorenz Attractor example in the next Chapter, \textit{Execution Walkthrough}. \section{Uplifting the CT Type} \label{sec:typeclasses} @@ -23,7 +23,7 @@ The typeclasses \texttt{Functor}, \texttt{Applicative} and \texttt{Monad} are al Given that the \texttt{CT} type is just a type alias with \texttt{ReaderT} as the under-the-hood type, all of these lift operations are already provided in Haskell's libraries. However, it is still valuable to present their implementation to completely understand what the final look of the DSL will be. Hence, the following implementations will -assume we \textit{aren't} use CT as the type alias and instead we will be showing the implementations as if we are using the definition used previously~\cite{Lemos2022} for the +assume we \textit{aren't} using CT as the type alias and instead we will be showing the implementations as if we are using the definition used previously~\cite{Lemos2022} for the \texttt{CT} type: \begin{purespec} @@ -46,7 +46,7 @@ instance Functor CT where \label{fig:functor} \end{figure} -The next typeclass, \texttt{Applicative}, deals with functions that are inside the \texttt{CT} type. When implemented (again, referring to the non-type-alias version), this algebraic operation lifts this internal function, wrapped by the type of choice, applying the \textit{external} type to its \textit{internal} members, thus generating again a function with the signature \texttt{CT a -> CT b}. The minimum requirements for this typeclass is the function \textit{pure}, a function responsible for wrapping any value with the \texttt{CT} wrapper, and the \texttt{<*>} operator, which does the aforementioned interaction between the internal values with the outer shell. The implementation of this typeclass is presented in the code bellow, in which the dependency \texttt{df} has the signature \texttt{CT (a -> b)} and its internal function \texttt{a -> b} is being lifted to the \texttt{CT} type.
Figure \ref{fig:applicative} illustrates the described lifting with \texttt{Applicative}. +The next typeclass, \texttt{Applicative}, deals with functions that are inside the \texttt{CT} type. When implemented (again, referring to the non-type-alias version), this algebraic operation lifts this internal function, wrapped by the type of choice, applying the \textit{external} type to its \textit{internal} members, thus generating again a function with the signature \texttt{CT a -> CT b}. The minimum requirements for this typeclass are the function \textit{pure}, a function responsible for wrapping any value with the \texttt{CT} wrapper, and the \texttt{<*>} operator, which does the aforementioned interaction between the internal values and the outer shell. In the implementation of this typeclass, the dependency \texttt{df} has the signature \texttt{CT (a -> b)} and its internal function \texttt{a -> b} is being lifted to the \texttt{CT} type. Figure \ref{fig:applicative} illustrates the described lifting with \texttt{Applicative}. \begin{figure}[ht!] \begin{minipage}{.55\textwidth} @@ -98,7 +98,7 @@ bind k (CT m) \label{fig:monad} \end{figure} -Aside from lifting operations, the final typeclass related to data manipulation is the \texttt{MonadIO} typeclass. It comprises only one function, \textit{liftIO}, and its purpose is to change the structure that is wrapping the value, going from an \texttt{IO} outer shell to the monad of interest, \texttt{CT} in this case. The usefulness of this typeclass will be more clear in the next topic, Section \ref{sec:integrator}. The implementation is bellow, alongside its visual representation in Figure \ref{fig:monadIO}. Once again, consider the explicit +Aside from lifting operations, the final typeclass related to data manipulation is the \texttt{MonadIO} typeclass.
It comprises only one function, \textit{liftIO}, and its purpose is to change the structure that is wrapping the value, going from an \texttt{IO} outer shell to the monad of interest, \texttt{CT} in this case. The usefulness of this typeclass will become clearer in the next topic, Section \ref{sec:integrator}. The implementation follows, alongside its visual representation in Figure \ref{fig:monadIO}. Once again, consider the explicit definition for the \texttt{CT} type instead of the type alias. \begin{figure}[ht!] @@ -117,8 +117,8 @@ instance MonadIO CT where \label{fig:monadIO} \end{figure} -Finally, there are the typeclasses related to mathematical operations. The typeclasses \texttt{Num}, \texttt{Fractional} and \texttt{Floating} provide unary and binary numerical operations, such as arithmetic operations and trigonometric functions. However, because we want to use them with the \texttt{CT} type, their implementation involve lifting. Further, the \texttt{Functor} and \texttt{Applicative} typeclasses allow us to execute this lifting, since they are designed for this purpose. The code bellow depicts the implementation for unary and binary operations, which are used in the requirements for those typeclasses. As a side note, to make these implementations possible for the type-aliased version of the \texttt{CT} type, it is -required to use a compiler extension \texttt{FlexibleInstances}. Further, the same operations below can be used as internal helpers for both versions of the type: +Finally, there are the typeclasses related to mathematical operations. The typeclasses \texttt{Num}, \texttt{Fractional} and \texttt{Floating} provide unary and binary numerical operations, such as arithmetic operations and trigonometric functions. However, because we want to use them with the \texttt{CT} type, their implementation involves lifting.
Further, the \texttt{Functor} and \texttt{Applicative} typeclasses allow us to execute this lifting, since they are designed for this purpose. The following code depicts the implementation for unary and binary operations, which are used in the requirements for those typeclasses. As a side note, to make these implementations possible for the type-aliased version of the \texttt{CT} type, it is +required to use a compiler extension \texttt{FlexibleInstances}. Further, the same operations can be used as internal helpers for both versions of the type: \begin{purespec} unaryOP :: (a -> b) -> CT a -> CT b @@ -134,7 +134,7 @@ After these improvements in the \texttt{CT} type, it is possible to map some of First and foremost, all FF-GPAC units receive \textit{time} as an available input to compute. The \texttt{CT} type represents continuous physical dynamics~\cite{LeeModeling}, which means that it portrays a function from time to physical output. Hence, it already has time embedded into its definition; a record with type \texttt{Parameters} is received as a dependency to obtain the final result at that moment. Furthermore, it remains to model the FF-GPAC's black boxes and the composition rules that bind them together. -The simplest unit of all, \texttt{Constant Unit}, can be achieved via the implementation of the \texttt{Applicative} and \texttt{Num} typeclasses. First, this unit needs to receive the time of simulation at that point, which is an granted by the \texttt{CT} type. Next, it needs to return a constant value $k$ for all moments in time. The \texttt{Num} given the \texttt{CT} type the option of using number representations, such as the types \texttt{Int}, \texttt{Integer}, \texttt{Float} and \texttt{Double}. Further, the \texttt{Applicative} typeclass can lift those number-related functions to the desired type by using the \textit{pure} function. 
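As a self-contained sketch of how helpers like \texttt{unaryOP} and \texttt{binaryOP} support the numeric typeclasses, the following uses a simplified, pure stand-in for \texttt{CT} -- a function of time only. The names \texttt{Sig}, \texttt{at}, \texttt{unaryOp} and \texttt{binaryOp} are illustrative; the real \texttt{CT} also involves \texttt{IO} and a \texttt{Parameters} record:

```haskell
-- Simplified stand-in for CT: a pure function of time.
newtype Sig a = Sig { at :: Double -> a }

-- Lift a unary function pointwise over a signal.
unaryOp :: (a -> b) -> Sig a -> Sig b
unaryOp f (Sig g) = Sig (f . g)

-- Lift a binary function pointwise over two signals.
binaryOp :: (a -> b -> c) -> Sig a -> Sig b -> Sig c
binaryOp f (Sig g) (Sig h) = Sig (\t -> f (g t) (h t))

-- Num comes for free from the lifting helpers; fromInteger plays the
-- role of the Constant Unit, yielding the same value at every time.
instance Num a => Num (Sig a) where
  (+)         = binaryOp (+)
  (-)         = binaryOp (-)
  (*)         = binaryOp (*)
  abs         = unaryOp abs
  signum      = unaryOp signum
  fromInteger = Sig . const . fromInteger
```

With this instance, expressions such as \texttt{2 * s + s} denote pointwise arithmetic on signals, mirroring how FF-GPAC's adder and multiplier units combine outputs.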
+The simplest unit of all, \texttt{Constant Unit}, can be achieved via the implementation of the \texttt{Applicative} and \texttt{Num} typeclasses. First, this unit needs to receive the time of simulation at that point, which is granted by the \texttt{CT} type. Next, it needs to return a constant value $k$ for all moments in time. The \texttt{Num} typeclass gives the \texttt{CT} type the option of using number representations, such as the types \texttt{Int}, \texttt{Integer}, \texttt{Float} and \texttt{Double}. Further, the \texttt{Applicative} typeclass can lift those number-related functions to the desired type by using the \textit{pure} function. Arithmetic basic units, such as the \texttt{Adder Unit} and the \texttt{Multiplier Unit}, are being modeled by the \texttt{Functor}, \texttt{Applicative} and \texttt{Num} typeclasses. Those two units use binary operations with physical signals. As demonstrated in the previous Section, the combination of numerical and lifting typeclasses lets us model such operations. Figure \ref{fig:gpacBind1} shows FF-GPAC's analog circuits alongside their \texttt{FACT} counterparts. The fourth unit and the composition rules will be mapped after describing the second main type of \texttt{FACT}: the \texttt{Integrator} type. @@ -150,7 +150,7 @@ The \texttt{CT} type directly interacts with a second type that intensively expl \includegraphics[width=0.95\linewidth]{MastersThesis/img/StateMachine} \end{minipage}\hfill \begin{minipage}[c]{0.32\textwidth} - \caption{State Machines are a common abstraction in computer science due to its easy mapping between function calls and states. Memory regions and peripherals are embedded with the idea of a state, not only pure functions. Further, side effects can even act as the trigger to move from one state to another, meaning that executing a simple function can do more than return a value.
Its internal guts can significantly modify the state machine.} + \caption[Example of a State Machine]{State Machines are a common abstraction in computer science due to their easy mapping between function calls and states. Memory regions and peripherals are embedded with the idea of a state, not only pure functions. Further, side effects can even act as the trigger to move from one state to another, meaning that executing a simple function can do more than return a value. Its internal guts can significantly modify the state machine.} \label{fig:stateMachine} \end{minipage} \end{figure} @@ -167,7 +167,7 @@ data Integrator = Integrator { initial :: CT Double, } \end{purespec} -There are three functions that involve the \texttt{Integrator} and the \texttt{CT} types together: the function \textit{createInteg}, responsible for allocating the memory that the pointer will pointer to, \textit{readInteg}, letting us to read from the pointer, and \textit{updateInteg}, a function that alters the content of the region being pointed. In summary, these functions allow us to create, read and update data from that region, if we have the pointer on-hand. +There are three functions that involve the \texttt{Integrator} and the \texttt{CT} types together: the function \textit{createInteg}, responsible for allocating the memory that the pointer will point to, \textit{readInteg}, letting us read from the pointer, and \textit{updateInteg}, a function that alters the content of the region being pointed to. In summary, these functions allow us to create, read and update data from that region, if we have the pointer on-hand.
All functions related to the integrator use what's known as \texttt{do-notation}, syntactic sugar of the \texttt{Monad} typeclass for the bind operator. The following code is the implementation of the \textit{createInteg} function, which creates an integrator: \begin{spec} createInteg :: CT Double -> CT Integrator @@ -180,7 +180,7 @@ createInteg i = do The first step to create an integrator is to manage the initial value, which is a function with the type \texttt{Parameters -> IO Double} wrapped in \texttt{CT} via the \texttt{ReaderT}. After acquiring a given initial value \texttt{i}, the integrator needs to assure that any given parameter record is the beginning of the computation process, i.e., it starts from $t_0$. The \texttt{initialize} function (line 3) fulfills this role, doing a reset in \texttt{time}, \texttt{iteration} and \texttt{stage} in a given parameter record. This is necessary because all the implemented solvers presume \textit{sequential steps}, starting from the initial condition. So, in order to not allow this error-prone behaviour, the integrator makes sure that the initial state of the system is configured correctly. The next step is to allocate memory to this computation --- a procedure that will get you the initial value, while modifying the parameter record dependency of the function accordingly.
The function \texttt{liftIO} (liine 3) is capable of removing the \texttt{IO} wrapper and adding an arbitrary monad in its place, \texttt{CT} in this case. So, after line 3 the \texttt{comp} value has the desired \texttt{CT} type. The remaining step of this creation process is to construct the integrator itself by building up the record with the correct fields, e.g., the CT version of the initial value and the pointer to the constructed computation written in memory (lines 4 and 5). +The following stage is to do a type conversion, given that in order to create the \texttt{Integrator} record, it is necessary to have the type \texttt{IORef (CT Double)}. At first glance, this seems to be an issue because the result of the \textit{newIORef} function is wrapped with the \texttt{IO} monad~\footnote{\label{foot:IORef} \texttt{IORef} \href{https://hackage.haskell.org/package/base-4.16.1.0/docs/Data-IORef.html}{\textcolor{blue}{hackage documentation}}.}. This conversion is the reason why the \texttt{IO} monad is being used in the implementation, and hence forced us to implement the typeclass \texttt{MonadIO}. The function \texttt{liftIO} (line 3) is capable of removing the \texttt{IO} wrapper and adding an arbitrary monad in its place, \texttt{CT} in this case. So, after line 3 the \texttt{comp} value has the desired \texttt{CT} type. The remaining step of this creation process is to construct the integrator itself by building up the record with the correct fields, e.g., the CT version of the initial value and the pointer to the constructed computation written in memory (lines 4 and 5).
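The pointer discipline behind \textit{createInteg}, \textit{readInteg} and \textit{updateInteg} can be seen in isolation with the following sketch in plain \texttt{IO} (no \texttt{CT} or \texttt{liftIO}; the name \texttt{demoCell} is illustrative): allocate a mutable cell, read it, then redirect it to new content.

```haskell
import Data.IORef

-- Allocate a cell (cf. createInteg's newIORef), read it (cf. readInteg
-- dereferencing), then overwrite it (cf. updateInteg redirecting the
-- computation pointer). Returns the values seen before and after.
demoCell :: IO (Double, Double)
demoCell = do
  ref    <- newIORef 1.0
  before <- readIORef ref
  writeIORef ref 2.0
  after  <- readIORef ref
  pure (before, after)
```

In FACT proper, these \texttt{IO} actions are lifted into \texttt{CT} via \texttt{liftIO}, which is why the \texttt{MonadIO} instance was needed.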
Next (line 4 onward), -create a new computation, so-called \texttt{z} --- a function wrapped in the \texttt{CT} type that receives a \texttt{Parameters} record and computes the result based on the solving method. -Because this computation needs to do lookups on some configuration values, we use the function \texttt{ask} (line 5) from \texttt{ReaderT} to get our environment values; this case +we create a new computation, so-called \texttt{z} --- a function wrapped in the \texttt{CT} type that receives a \texttt{Parameters} record and computes the result based on the solving method. +Because this computation needs to do lookups on some configuration values, we use the function \texttt{ask} (line 5) from \texttt{ReaderT} to get our environment values; in this case a value of type \texttt{Parameters}. Later on, the follow-up step is to build a copy of the \textit{same process} being pointed by the \texttt{computation} pointer (line 6). Finally, after checking the chosen solver (line 7), it is executed one iteration of the process by calling \textit{integEuler}, or \textit{integRK2} or \textit{integRK4}. After line 10, this entire process \texttt{z} is being pointed by the \texttt{computation} pointer, being done by the $writeIORef$ function~\footref{foot:IORef}. It may seem confusing that inside \texttt{z} we are \textit{reading} what is being pointed and later, on the last line of \textit{updateInteg}, this is being used on the final line to update that same pointer. This is necessary, as it will be explained in the next Chapter \textit{Execution Walkthrough}, to allow the use of an \textit{implicit recursion} to assure the sequential aspect needed by the solvers. For now, the core idea is this: the \textit{updateInteg} function alters the \textit{future} computations; it rewrites which procedure will be pointed by the \texttt{computation} pointer. 
This new procedure, which we called \texttt{z}, creates an intermediate computation, \texttt{whatToDo} (line 6), that \textit{reads} what this pointer is addressing, which is \texttt{z} itself. @@ -230,7 +230,7 @@ The preceding rules include defining connections with polynomial circuits --- an Going back to the type signature of the \textit{updateInteg}, \texttt{Integrator -> CT Double -> CT ()}, we can interpret this function as a \textit{wiring} operation. This function connects as an input of the integrator, represented by the \textit{Integrator} type, the output of a polynomial circuit, represented by the value with \texttt{CT Double} type. Because the operation is just setting up the connections between the two, the function ends with the type \texttt{CT ()}.
+A polynomial circuit can have the time $t$ or an output of another integrator as inputs, with restricted feedback (rule 1). This rule is being matched by the following: the \texttt{CT} type makes time available to the circuits, and the \textit{readInteg} function allows us to read the output of other integrators. The second rule, related to multiple inputs in the combinational circuit, is being followed because we can link inputs using arithmetic operations, a feature provided by the \texttt{Num} typeclass. Moreover, because the sole purpose of \texttt{FACT} is to solve differential equations, we are \textit{only} interested in circuits that calculate integrals, meaning that it is guaranteed that the integrand of the integrator will always be the output of a polynomial unit (rule 3), as we saw with the type signature of the \textit{updateInteg} function. The fourth rule is also being attended to, given that the solver methods inside the \textit{updateInteg} function always calculate the integral with respect to the time variable. Figure \ref{fig:gpacBind2} summarizes these last mappings between the implementation and FF-GPAC's integrator and rules of composition. \figuraBib{GPACBind2}{The integrator functions attend the rules of composition of FF-GPAC, whilst the \texttt{CT} and \texttt{Integrator} types match the four basic units}{}{fig:gpacBind2}{width=.9\textwidth}% @@ -244,7 +244,7 @@ The remaining topic of this Chapter is to describe in detail how the solver meth \begin{itemize} \item Euler Method or First-order Runge-Kutta Method \item Second-order Runge-Kutta Method -\item Forth-order Runge-Kutta Method +\item Fourth-order Runge-Kutta Method \end{itemize} To explain how the solvers work and their nuances, it is useful to go into the implementation of the simplest one --- the Euler method. However, the implementation of the solvers uses a slightly different function for the next step or iteration in comparison to the one explained in Chapter 2.
Hence, it is worthwhile to remember how this method originally iterates in terms of its mathematical description and compare it to the new function. From equation \ref{eq:nextStep}, we can obtain a different function for the next step, by subtracting the index from both sides of the equation: diff --git a/doc/MastersThesis/Lhs/Interpolation.lhs b/doc/MastersThesis/Lhs/Interpolation.lhs index 31afb32..b621638 100644 --- a/doc/MastersThesis/Lhs/Interpolation.lhs +++ b/doc/MastersThesis/Lhs/Interpolation.lhs @@ -61,8 +61,8 @@ iterToTime interv solver n st = delta RungeKutta4 3 = dt solver \end{spec} -A transformation from iteration to time depends on and on the chosen solver method due to their next step functions. -For instance, the second and forth order Runge-Kutta methods have more stages, and it uses fractions of the time step for more granular use of the derivative function. This is why lines 11 and 12 are using half of the time step. Moreover, all discrete time calculations assume that the value starts from the beginning of the simulation (\textit{startTime}). The result is obtained by the sum of the initial value, the solver-dependent \textit{delta} function and the iteration times the solver time step (line 6). +A transformation from iteration to time depends on the chosen solver method due to their next step functions. +For instance, the second and fourth order Runge-Kutta methods have more stages, and they use fractions of the time step for more granular use of the derivative function. This is why lines 11 and 12 are using half of the time step. Moreover, all discrete time calculations assume that the value starts from the beginning of the simulation (\textit{startTime}). The result is obtained by the sum of the initial value, the solver-dependent \textit{delta} function and the iteration times the solver time step (line 6).
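The iteration-to-time mapping and the stepping it supports can be sketched with a hedged, standalone Euler stepper (illustrative only; the name \texttt{eulerAt} is not FACT's \textit{integEuler}, which threads this logic through \texttt{CT} and \texttt{Parameters}):

```haskell
-- Explicit Euler: y_{n+1} = y_n + h * f(t_n, y_n), where t_n is recovered
-- from the iteration number as t0 + n*h, mirroring the discrete-to-continuous
-- mapping performed by iterToTime for the single-stage case.
eulerAt :: (Double -> Double -> Double)  -- derivative f t y
        -> Double                        -- start time t0
        -> Double                        -- time step h
        -> Double                        -- initial condition y0
        -> Int                           -- iteration number n
        -> Double
eulerAt f t0 h y0 n = go 0 y0
  where
    go k y
      | k == n    = y
      | otherwise = go (k + 1) (y + h * f (t0 + fromIntegral k * h) y)
```

For instance, solving $y' = y$ with $y_0 = 1$ for 1000 steps of $h = 10^{-3}$ approximates $e$ at $t = 1$.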
This means that if the time value $t_x$ is not present from the solver point of view, it is not possible to obtain $y(t_x)$. The proposed solution is to add an \textit{interpolation} function into the pipeline, which addresses this transition. Thus, values in between solver steps will be transferred back to the continuous domain. @@ -99,7 +99,7 @@ normally; \texttt{SolverStage} will be used. Next, the driver needs to be updated. So, the proposed mechanism is the following: the driver will identify these corner cases and communicate to the integrator --- via the new \texttt{Stage} field in the \texttt{Solver} data type --- that the interpolation needs to be added into the pipeline of execution. When this flag is not on, i.e., the \texttt{Stage} informs to continue execution normally, the implementation goes as the previous chapters detailed. This behaviour is altered \textit{only} in particular scenarios, which the driver will be responsible for identifying. -It remains to re-implement the driver functions. The driver will notify the integrator that an interpolation needs to take place. The code below shows these changes: +It remains to re-implement the driver functions. The driver will notify the integrator that an interpolation needs to take place. The following code shows these changes: \ignore{ \begin{code} @@ -150,9 +150,9 @@ runCT m t sl = The implementation of \textit{iterationBnds} uses the \textit{ceiling} function because this rounding is used to go to the iteration domain. However, given that the interpolation \textit{requires} both solver steps --- the one that came before $t_x$ and the one immediately afterwards --- the number of iterations always needs to surpass the requested time. For instance, the time 5.3 seconds will demand the fifth and sixth iterations with a time step of 1 second. When using \textit{ceiling}, it is assured that the value of interest will be in the interval of computed values.
So, when dealing with 5.3, the integrator will calculate all values up to 6 seconds. -Lines 5 to 15 are equal to the previous implementation of the \textit{runCT} function. On line 16, the discrete version of \texttt{t}, \texttt{disct}, will be used for detecting if an +Lines 5 to 15 (from the previous code snippet) are equal to the previous implementation of the \textit{runCT} function. On line 16, the discrete version of \texttt{t}, \texttt{disct}, will be used for detecting if an interpolation will be needed. All the simulation values are being prepared on line 17 --- Haskell being a lazy language, the label \texttt{values} will not necessarily -be evaluated strictly. Line 19 establishes a condition, checkiing if the difference between the time of interest \texttt{t} and \texttt{disct} is greater or not +be evaluated strictly. Line 19 establishes a condition, checking if the difference between the time of interest \texttt{t} and \texttt{disct} is greater or not than a value \texttt{epslon}, to identify if the normal flow of execution can proceed. If it can't, on line 22 a new record of type \texttt{Parameters} is created (\texttt{ps}), specifically for these special cases of mismatch between discrete and continuous time. The main difference within this special record is relevant: the stage field of the solver is being set to \texttt{Interpolate}. Finally, on line 25 the last element from the list of outputs \texttt{values} is removed, and the simulation is appended using the created \texttt{ps} with @@ -185,7 +185,7 @@ interpolate m = do in z1 + (z2 - z1) * pure ((t - t1) / (t2 - t1)) \end{code} -Lines 1 to 5 continues the simulation with the normal workflow. If a corner case comes in, the reminaing code applies \textit{linear interpolation} to it. It accomplishes this by first comparing the next and previous discrete times (lines 16 and 19) relative to \texttt{x} (line 11) --- the discrete counterpart of the time of interest \texttt{t} (line 9).
These time points are calculated by their correspondent iterations (lines 12 and 13). Then, the integrator calculates the outcomes at these two points, i.e., do applications of the previous and next modeled times points with their respective parametric records (lines 22 and 23). Finally, line 24 executes the linear interpolation with the obtained values that surround the non-discrete time point. This particular interpolation was chosen for the sake of simplicity, but it can be replaced by higher order methods. Figure \ref{fig:interpolate} illustrates the effect of the \textit{interpolate} function when converting domains. +Lines 1 to 5 (from the previous code snippet) continue the simulation with the normal workflow. If a corner case comes in, the remaining code applies \textit{linear interpolation} to it. It accomplishes this by first comparing the next and previous discrete times (lines 16 and 19) relative to \texttt{x} (line 11) --- the discrete counterpart of the time of interest \texttt{t} (line 9). These time points are calculated by their corresponding iterations (lines 12 and 13). Then, the integrator calculates the outcomes at these two points, i.e., it applies the previous and next modeled time points with their respective parametric records (lines 22 and 23). Finally, line 24 executes the linear interpolation with the obtained values that surround the non-discrete time point. This particular interpolation was chosen for the sake of simplicity, but it can be replaced by higher-order methods. Figure \ref{fig:interpolate} illustrates the effect of the \textit{interpolate} function when converting domains.
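The two mechanics described above --- rounding the iteration bound with \textit{ceiling} and interpolating linearly between the two surrounding solver steps --- can be sketched in isolation in plain Haskell. The names below are hypothetical standalone helpers, not the FFACT definitions:

```haskell
-- Standalone sketch of the two ideas above (hypothetical names, not FFACT code).

-- With a step size dt, `ceiling` maps a continuous time t to the number of
-- iterations needed to cover it, so t = 5.3 with dt = 1 demands 6 iterations,
-- guaranteeing that both solver steps surrounding t are computed.
iterationBound :: Double -> Double -> Int
iterationBound t dt = ceiling (t / dt)

-- Linear interpolation: given solver outputs z1 at time t1 and z2 at time t2,
-- the value at an intermediate time t lies on the line joining the two points.
linearInterp :: Double -> (Double, Double) -> (Double, Double) -> Double
linearInterp t (t1, z1) (t2, z2) = z1 + (z2 - z1) * ((t - t1) / (t2 - t1))
```

For the running example of $t = 5.3$ with a unit time step, `iterationBound` yields 6 and `linearInterp` blends the outputs of the fifth and sixth solver steps.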
\begin{spec} updateInteg :: Integrator -> CT Double -> CT () diff --git a/doc/MastersThesis/Lhs/Introduction.lhs b/doc/MastersThesis/Lhs/Introduction.lhs index 0bcdb0a..85da3dc 100644 --- a/doc/MastersThesis/Lhs/Introduction.lhs +++ b/doc/MastersThesis/Lhs/Introduction.lhs @@ -15,7 +15,7 @@ Within software, the aforementioned issues --- the lack of time semantics and th The development of a \textit{model of computation} (MoC) to define and express models is the major hero towards this better set of abstractions, given that it provides clear, formal and well-defined semantics~\cite{LeeModeling} on how engineering artifacts should behave~\cite{Lee2016}. These MoCs determine how concurrency works in the model, choose which communication protocols will be used, define whether different components share the notion of time, as well as whether and how they share state~\cite{LeeModeling, LeeComponent}. Also, Sangiovanni and Lee~\cite{LeeSangiovanni} proposed a formalized denotational framework to allow understanding and comparison between mixtures of MoCs, thus solving the heterogeneity issue that arises naturally in many situations during design~\cite{LeeModeling, LeeComponent}. Moreover, their framework also describes how to compose different MoCs, along with addressing the absence of time in models, via what is defined as \textit{tagged systems}~\cite{Chupin2019, Perez2023, Rovers2011} --- a relationship between a \textit{tag}, generally used to order events, and an output value. -Ingo et al. went even further~\cite{Sander2017} by presenting a framework based on the idea of tagged systems, known as \textit{ForSyDe}. The tool's main goal is to push system design to a higher level of abstraction, by combining MoCs with the functional programming paradigm. The technique separates the design into two phases, specification and synthesis. The former stage, specification, focus on creating a high-level abstraction model, in which mathematical formalism is taken into account.
The latter part, synthesis, is responsible for applying design transformations --- the model is adapted to ForSyDe's semantics --- and mapping this result onto a chosen architecture for later be implemented in a target programming language or hardware platform~\cite{Sander2017}. Afterward, Seyed-Hosein and Ingo~\cite{Seyed2020} created a co-simulation architecture for multiple models based on ForSyDe's methodology, addressing heterogeneity across languages and tools with different semantics. One example of such tools treated in the reference is Simulink~\footnote{Simulink \href{http://www.mathworks.com/products/simulink/}{\textcolor{blue}{documentation}}.}, the de facto model-based design tool that lacks a formal semantics basis~\cite{Seyed2020}. Simulink being the standard tool for modeling means that, despite all the effort into utilizing a formal approach to model-based design, this is still an open problem. +Ingo et al. went even further~\cite{Sander2017} by presenting a framework based on the idea of tagged systems, known as \textit{ForSyDe}. The tool's main goal is to push system design to a higher level of abstraction, by combining MoCs with the functional programming paradigm. The technique separates the design into two phases, specification and synthesis. The former stage, specification, focuses on creating a high-level abstraction model, in which mathematical formalism is taken into account. The latter part, synthesis, is responsible for applying design transformations --- the model is adapted to ForSyDe's semantics --- and mapping this result onto a chosen architecture to be implemented later in a target programming language or hardware platform~\cite{Sander2017}. Afterward, Seyed-Hosein and Ingo~\cite{Seyed2020} created a co-simulation architecture for multiple models based on ForSyDe's methodology, addressing heterogeneity across languages and tools with different semantics.
One example of such tools treated in the reference is Simulink~\footnote{Simulink \href{http://www.mathworks.com/products/simulink/}{\textcolor{blue}{documentation}}.}, the de facto model-based design tool that lacks a formal semantics basis~\cite{Seyed2020}. Simulink being the standard tool for modeling means that, despite all the effort put into utilizing a formal approach to model-based design, this is still an open problem. \section{Contribution} \label{sec:intro} @@ -39,7 +39,7 @@ Furthermore, this implementation is based on \texttt{Aivika}~\footnote{\texttt{A \begin{figure}[ht!] \begin{minipage}{0.45\linewidth} \begin{purespec} - -- Original version of FACT + -- FACT lorenzModel = do integX <- createInteg 1.0 integY <- createInteg 1.0 @@ -58,7 +58,7 @@ Furthermore, this implementation is based on \texttt{Aivika}~\footnote{\texttt{A \end{minipage} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \begin{minipage}{0.45\linewidth} \begin{purespec} - -- Final version of FACT + -- FFACT lorenzModel = mdo x <- integ (sigma * (y - x)) 1.0 y <- integ (x * (rho - z) - y) 1.0 z <- integ (x * y - beta * z) 1.0 @@ -69,7 +69,7 @@ Furthermore, this implementation is based on \texttt{Aivika}~\footnote{\texttt{A return $ sequence [x, y, z] \end{purespec} \end{minipage} -\caption{The translation between the world of software and the mathematical description of differential equations are explicit in the final version of \texttt{FACT}.} +\caption{The translation between the world of software and the mathematical description of differential equations is more concise and explicit in \texttt{FFACT}.} \label{fig:introExample} \end{figure} @@ -100,7 +100,7 @@ When comparing models in FFACT to other implementations in other ecosystems and one using the HEDSL needs less knowledge about the host programming language, Haskell in our case, \textit{and} one can more easily bridge the gap between a mathematical description of the problem and its analogous written in FFACT, due to less syntatical burden and noise from a user's perpective.
Figures~\ref{fig:lorenz-simulink}, ~\ref{fig:lorenz-matlab},~\ref{fig:lorenz-python},~\ref{fig:lorenz-mathematica}, and~\ref{fig:lorenz-yampa} show some comparisons -between the same Lorenz Attractor model in different tecnologies. It is worth noting that these examples only show \textit{the system's description}, i.e., the \textit{drivers} of the simulations +between the same Lorenz Attractor model in different technologies, thus allowing a contrast in notational conciseness between them. Ideally, a system's description should contain the \textit{least} amount of notational noise and artifacts relative to its mathematical counterpart. It is worth noting that these examples only show \textit{the system's description}, i.e., the \textit{drivers} of the simulations are being omitted. \begin{figure}[ht!] diff --git a/doc/MastersThesis/tex/abstract.tex b/doc/MastersThesis/tex/abstract.tex index 8e6b588..d9d9f97 100644 --- a/doc/MastersThesis/tex/abstract.tex +++ b/doc/MastersThesis/tex/abstract.tex @@ -1,2 +1,2 @@ -Physical phenomena is difficult to properly model due to its continuous nature. Its paralellism and nuances were a challenge before the transistor, and even after the digital computer still is an unsolved issue. In the past, some formalism were brought with the General Purpose Analog Computer proposed by Shannon in the 1940s. Unfortunately, this formal foundation was lost in time, with \textit{ad-hoc} practices becoming mainstream to simulate continuous time. In this work, we propose a domain-specific language (DSL) -- FACT and its evolution FFACT -- written in Haskell that resembles GPAC's concepts. The main goal is to take advantage of high level abtractions, both from the areas of programming and mathematics, to execute systems of differential equations, which describe physical problems mathematically. We evaluate performance and domain problems and address them accordingly. Future improvements for the DSL are also explored and detailed.
+Physical phenomena are difficult to properly model due to their continuous nature. Their parallelism and nuances were a challenge before the transistor, and even after the digital computer they remain an unsolved issue. In the past, some formalism was introduced with the General Purpose Analog Computer proposed by Shannon in the 1940s. Unfortunately, this formal foundation was lost in time, with \textit{ad-hoc} practices becoming mainstream to simulate continuous time. In this work, we propose a domain-specific language (DSL) -- FACT and its evolution FFACT -- written in Haskell that resembles GPAC's concepts. The main goal is to take advantage of high-level abstractions, both from the areas of programming and mathematics, to execute systems of differential equations, which describe physical problems mathematically. We evaluate performance and domain problems and address them accordingly. Future improvements for the DSL are also explored and detailed. diff --git a/doc/MastersThesis/tex/dedication.tex b/doc/MastersThesis/tex/dedication.tex index 4ffe879..44930e0 100644 --- a/doc/MastersThesis/tex/dedication.tex +++ b/doc/MastersThesis/tex/dedication.tex @@ -3,7 +3,7 @@ To my father Rodolfo Rocha, for sharing with me his wise and insightful perceptions about life. Your distinct perspectives bring me awareness about any subject that we end up -discussing. Not everyone can have the luxuary of having +discussing. Not everyone can have the luxury of having civil and calm conversations with whom they may fundamentally disagree. My dad gave me my first opportunities of this kind and I'm truly grateful to him for that.
diff --git a/doc/MastersThesis/thesis.lof b/doc/MastersThesis/thesis.lof index 8d5bd3e..a02a200 100644 --- a/doc/MastersThesis/thesis.lof +++ b/doc/MastersThesis/thesis.lof @@ -2,7 +2,7 @@ \babel@toc {american}{}\relax \babel@toc {american}{}\relax \addvspace {10\p@ } -\contentsline {figure}{\numberline {1.1}{\ignorespaces The translation between the world of software and the mathematical description of differential equations are explicit in the final version of \texttt {FACT}.}}{4}{figure.caption.8}% +\contentsline {figure}{\numberline {1.1}{\ignorespaces The translation between the world of software and the mathematical description of differential equations is more concise and explicit in \texttt {FFACT}.}}{4}{figure.caption.8}% \contentsline {figure}{\numberline {1.2}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Simulink implementation~\cite {Simulink}.}}{6}{figure.caption.9}% \contentsline {figure}{\numberline {1.3}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Matlab implementation.}}{7}{figure.caption.10}% \contentsline {figure}{\numberline {1.4}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Python implementation.}}{7}{figure.caption.11}% @@ -17,7 +17,7 @@ \contentsline {figure}{\numberline {2.6}{\ignorespaces Product types are a combination of different sets, where you pick a representative from each one. Digital clocks' time and objects' coordinates in space are common use cases.
In Haskell, a product type can be defined using a \textit {record} alongside with the constructor, where the labels for each member inside it are explicit.}}{13}{figure.caption.18}% \contentsline {figure}{\numberline {2.7}{\ignorespaces Depending on the application, different representations of the same structure need to be used due to the domain of interest and/or memory constraints.}}{14}{figure.caption.19}% \contentsline {figure}{\numberline {2.8}{\ignorespaces The minimum requirement for the \texttt {Ord} typeclass is the $<=$ operator, meaning that the functions $<$, $<=$, $>$, $>=$, \texttt {max} and \texttt {min} are now unlocked for the type \texttt {ClockTime} after the implementation. Typeclasses can be viewed as a third dimension in a type.}}{14}{figure.caption.20}% -\contentsline {figure}{\numberline {2.9}{\ignorespaces Replacements for the validation function within a pipeline like the above is common.}}{15}{figure.caption.21}% +\contentsline {figure}{\numberline {2.9}{\ignorespaces Replacements for the validation function within a pipeline like the above are common.}}{15}{figure.caption.21}% \contentsline {figure}{\numberline {2.10}{\ignorespaces The initial value is used as a starting point for the procedure. The algorithm continues until the time of interest is reached in the unknown function. Due to its large time step, the final answer is really far-off from the expected result.}}{17}{figure.caption.22}% \contentsline {figure}{\numberline {2.11}{\ignorespaces In Haskell, the \texttt {type} keyword works for alias. The first draft of the \texttt {CT} type is a \textit {function}, in which providing a floating point value as time returns another value as outcome.}}{17}{figure.caption.23}% \contentsline {figure}{\numberline {2.12}{\ignorespaces The \texttt {Parameters} type represents a given moment in time, carrying over all the necessary information to execute a solver step until the time limit is reached.
Some useful typeclasses are being derived to these types, given that Haskell is capable of inferring the implementation of typeclasses in simple cases.}}{18}{figure.caption.24}% @@ -29,14 +29,14 @@ \contentsline {figure}{\numberline {3.3}{\ignorespaces The $>>=$ operator used in the implementation is the \textit {bind} from the \texttt {IO} shell. This indicates that when dealing with monads within monads, it is frequent to use the implementation of the internal members.}}{23}{figure.caption.29}% \contentsline {figure}{\numberline {3.4}{\ignorespaces The typeclass \texttt {MonadIO} transforms a given value wrapped in \texttt {IO} into a different monad. In this case, the parameter \texttt {m} of the function is the output of the \texttt {CT} type.}}{23}{figure.caption.30}% \contentsline {figure}{\numberline {3.5}{\ignorespaces The ability of lifting numerical values to the \texttt {CT} type resembles three FF-GPAC analog circuits: \texttt {Constant}, \texttt {Adder} and \texttt {Multiplier}.}}{24}{figure.caption.31}% -\contentsline {figure}{\numberline {3.6}{\ignorespaces State Machines are a common abstraction in computer science due to its easy mapping between function calls and states. Memory regions and peripherals are embedded with the idea of a state, not only pure functions. Further, side effects can even act as the trigger to move from one state to another, meaning that executing a simple function can do more than return a value. 
Its internal guts can significantly modify the state machine.}}{25}{figure.caption.32}% +\contentsline {figure}{\numberline {3.6}{\ignorespaces Example of a State Machine}}{25}{figure.caption.32}% \contentsline {figure}{\numberline {3.7}{\ignorespaces The integrator functions attend the rules of composition of FF-GPAC, whilst the \texttt {CT} and \texttt {Integrator} types match the four basic units.}}{30}{figure.caption.33}% \addvspace {10\p@ } \contentsline {figure}{\numberline {4.1}{\ignorespaces The integrator functions are essential to create and interconnect combinational and feedback-dependent circuits.}}{34}{figure.caption.34}% \contentsline {figure}{\numberline {4.2}{\ignorespaces The developed DSL translates a system described by differential equations to an executable model that resembles FF-GPAC's description.}}{34}{figure.caption.35}% \contentsline {figure}{\numberline {4.3}{\ignorespaces Because the list implements the \texttt {Traversable} typeclass, it allows this type to use the \textit {traverse} and \textit {sequence} functions, in which both are related to changing the internal behaviour of the nested structures.}}{35}{figure.caption.36}% \contentsline {figure}{\numberline {4.4}{\ignorespaces A \textit {state vector} comprises multiple state variables and requires the use of the \textit {sequence} function to sync time across all variables.}}{35}{figure.caption.37}% -\contentsline {figure}{\numberline {4.5}{\ignorespaces When building a model for simulation, the above pipeline is always used, from both points of view. 
The operations with meaning, i.e., the ones in the \texttt {Semantics} pipeline, are mapped to executable operations in the \texttt {Operational} pipeline, and vice-versa.}}{36}{figure.caption.38}% +\contentsline {figure}{\numberline {4.5}{\ignorespaces Execution pipeline of a model.}}{36}{figure.caption.38}% \contentsline {figure}{\numberline {4.6}{\ignorespaces Using only FF-GPAC's basic units and their composition rules, it's possible to model the Lorenz Attractor example.}}{39}{figure.caption.39}% \contentsline {figure}{\numberline {4.7}{\ignorespaces After \textit {createInteg}, this record is the final image of the integrator. The function \textit {initialize} gives us protecting against wrong records of the type \texttt {Parameters}, assuring it begins from the first iteration, i.e., $t_0$.}}{40}{figure.caption.40}% \contentsline {figure}{\numberline {4.8}{\ignorespaces After \textit {readInteg}, the final floating point values is obtained by reading from memory a computation and passing to it the received parameters record. The result of this application, $v$, is the returned value.}}{41}{figure.caption.41}% @@ -56,9 +56,10 @@ \contentsline {figure}{\numberline {6.5}{\ignorespaces Caching changes the direction of walking through the iteration axis. 
It also removes an entire pass through the previous iterations.}}{61}{figure.caption.55}% \contentsline {figure}{\numberline {6.6}{\ignorespaces By using a logarithmic scale, we can see that the final implementation is performant with more than 100 million iterations in the simulation.}}{65}{figure.caption.58}% \addvspace {10\p@ } -\contentsline {figure}{\numberline {7.1}{\ignorespaces Resettable counter in hardware, inspired by Levent's works~\cite {levent2000, levent2002}.}}{70}{figure.caption.59}% -\contentsline {figure}{\numberline {7.2}{\ignorespaces Diagram of \texttt {createInteg} primitive for intuition..}}{72}{figure.caption.60}% -\contentsline {figure}{\numberline {7.3}{\ignorespaces Results of FFACT are similar to the final version of FACT..}}{75}{figure.caption.61}% +\contentsline {figure}{\numberline {7.1}{\ignorespaces Execution pipeline of a model.}}{67}{figure.caption.59}% +\contentsline {figure}{\numberline {7.2}{\ignorespaces Resettable counter in hardware, inspired by Levent's works~\cite {levent2000, levent2002}.}}{70}{figure.caption.60}% +\contentsline {figure}{\numberline {7.3}{\ignorespaces Diagram of \texttt {createInteg} primitive for intuition.}}{73}{figure.caption.61}% +\contentsline {figure}{\numberline {7.4}{\ignorespaces Results of FFACT are similar to the final version of FACT.}}{76}{figure.caption.62}% \addvspace {10\p@ } \addvspace {10\p@ } \babel@toc {american}{}\relax diff --git a/doc/MastersThesis/thesis.toc b/doc/MastersThesis/thesis.toc index 75eaf58..12603ed 100644 --- a/doc/MastersThesis/thesis.toc +++ b/doc/MastersThesis/thesis.toc @@ -35,18 +35,18 @@ \contentsline {section}{\numberline {6.6}Results with Caching}{63}{section.6.6}% \contentsline {chapter}{\numberline {7}Fixing Recursion}{66}{chapter.7}% \contentsline {section}{\numberline {7.1}Integrator's Noise}{66}{section.7.1}% -\contentsline {section}{\numberline {7.2}The Fixed-Point Combinator}{67}{section.7.2}% -\contentsline {section}{\numberline {7.3}Value Recursion
with Fixed-Points}{69}{section.7.3}% +\contentsline {section}{\numberline {7.2}The Fixed-Point Combinator}{68}{section.7.2}% +\contentsline {section}{\numberline {7.3}Value Recursion with Fixed-Points}{70}{section.7.3}% \contentsline {section}{\numberline {7.4}Tweak IV: Fixing FACT}{72}{section.7.4}% -\contentsline {chapter}{\numberline {8}Conclusion}{76}{chapter.8}% -\contentsline {section}{\numberline {8.1}Final Thoughts}{76}{section.8.1}% -\contentsline {section}{\numberline {8.2}Future Work}{77}{section.8.2}% -\contentsline {subsection}{\numberline {8.2.1}Formalism}{77}{subsection.8.2.1}% -\contentsline {subsection}{\numberline {8.2.2}Extensions}{77}{subsection.8.2.2}% -\contentsline {subsection}{\numberline {8.2.3}Refactoring}{78}{subsection.8.2.3}% -\contentsline {chapter}{\numberline {9}Appendix}{79}{chapter.9}% -\contentsline {section}{\numberline {9.1}Literate Programming}{79}{section.9.1}% -\contentsline {chapter}{References}{81}{section*.62}% +\contentsline {chapter}{\numberline {8}Conclusion}{77}{chapter.8}% +\contentsline {section}{\numberline {8.1}Final Thoughts}{77}{section.8.1}% +\contentsline {section}{\numberline {8.2}Future Work}{78}{section.8.2}% +\contentsline {subsection}{\numberline {8.2.1}Formalism}{78}{subsection.8.2.1}% +\contentsline {subsection}{\numberline {8.2.2}Extensions}{79}{subsection.8.2.2}% +\contentsline {subsection}{\numberline {8.2.3}Refactoring}{79}{subsection.8.2.3}% +\contentsline {chapter}{\numberline {9}Appendix}{81}{chapter.9}% +\contentsline {section}{\numberline {9.1}Literate Programming}{81}{section.9.1}% +\contentsline {chapter}{References}{83}{section*.63}% \babel@toc {american}{}\relax \babel@toc {american}{}\relax \babel@toc {american}{}\relax From af8993581f42afe9a46bfdb73e50951c0bee1db0 Mon Sep 17 00:00:00 2001 From: EduardoLR10 Date: Mon, 7 Apr 2025 00:40:21 -0300 Subject: [PATCH 09/10] Move examples and comparisons to Chapter 7 --- doc/MastersThesis/Lhs/Fixing.lhs | 144 ++++++++++++++++++++++++ 
doc/MastersThesis/Lhs/Introduction.lhs | 148 +------------------------ doc/MastersThesis/thesis.lof | 101 +++++++++-------- doc/MastersThesis/thesis.toc | 83 +++++++------- 4 files changed, 238 insertions(+), 238 deletions(-) diff --git a/doc/MastersThesis/Lhs/Fixing.lhs b/doc/MastersThesis/Lhs/Fixing.lhs index f4ac47f..d866ef8 100644 --- a/doc/MastersThesis/Lhs/Fixing.lhs +++ b/doc/MastersThesis/Lhs/Fixing.lhs @@ -340,6 +340,150 @@ for a gain in conciseness. Figure~\ref{fig:fixed-graph} shows the new results. \figuraBib{Graph3}{Results of FFACT are similar to the final version of FACT.}{}{fig:fixed-graph}{width=.97\textwidth}% +\newpage + +\section{Examples and Comparisons} +\label{sec:examples} + +In order to assess how \textit{concise} a model can be in FFACT, in comparison with the mathematical descriptions of the models, +we present comparisons between this dissertation's proposed implementation and the same example in Simulink~\footnote{Simulink \href{http://www.mathworks.com/products/simulink/}{\textcolor{blue}{documentation}}.}, Matlab~\footnote{Matlab \href{https://www.mathworks.com/products/matlab.html}{\textcolor{blue}{documentation}}.}, Mathematica~\footnote{Mathematica \href{https://www.wolfram.com/mathematica/}{\textcolor{blue}{documentation}}.}, and \texttt{Yampa}~\footnote{Yampa \href{https://hackage.haskell.org/package/Yampa}{\textcolor{blue}{hackage documentation}}.}. It is worth noting that the last one, \texttt{Yampa}, is also implemented in Haskell as a HEDSL. In each pair of comparisons, both conciseness and differences will be considered when implementing the Lorenz Attractor model. Ideally, a system's description should contain the \textit{least} amount of notational noise and artifacts relative to its mathematical counterpart. Note that these examples only show \textit{the system's description}, i.e., the \textit{drivers} of the simulations +are being omitted when not necessary to describe the system.
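Every comparison in this section implements the same Lorenz system with $\sigma = 10$, $\rho = 28$, $\beta = 8/3$ and initial state $(1, 1, 1)$. As a neutral reference for the system being described, a plain-Haskell forward-Euler sketch is shown below; it is illustrative only --- neither FFACT code nor any of the tools compared here, with an arbitrary step size:

```haskell
-- Forward-Euler sketch of the Lorenz system used in all comparisons.
-- Illustrative only: not FFACT code; the step size h is an assumption.
lorenzStep :: Double -> (Double, Double, Double) -> (Double, Double, Double)
lorenzStep h (x, y, z) =
  ( x + h * (sigma * (y - x))
  , y + h * (x * (rho - z) - y)
  , z + h * (x * y - beta * z) )
  where
    sigma = 10.0
    rho   = 28.0
    beta  = 8.0 / 3.0

-- Trajectory from the initial condition (1, 1, 1) used throughout the examples.
lorenzTrajectory :: Double -> Int -> [(Double, Double, Double)]
lorenzTrajectory h n = take n (iterate (lorenzStep h) (1.0, 1.0, 1.0))
```

The DSLs compared below all express exactly these three coupled equations; what differs is how much extra notation each tool requires around them.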
+ +Figure~\ref{fig:lorenz-simulink} depicts a side-by-side comparison between FFACT and Simulink. The Haskell HEDSL specifies a model in text format, whilst Simulink +is a visual tool --- you draw a diagram that represents the system, including the feedback loop of integrators, which is exposed in Simulink. +A visual tool can be useful for educational purposes, and a pictorial version of FFACT could be made by an external tool that compiles a diagram +down to the corresponding Haskell code of the HEDSL. + +\begin{figure}[ht!] + \begin{minipage}{0.45\linewidth} + \begin{purespec} + lorenzModel = mdo + x <- integ (sigma * (y - x)) 1.0 + y <- integ (x * (rho - z) - y) 1.0 + z <- integ (x * y - beta * z) 1.0 + let sigma = 10.0 + rho = 28.0 + beta = 8.0 / 3.0 + return $ sequence [x, y, z] + \end{purespec} + \end{minipage} + \begin{minipage}{0.5\linewidth} + \centering + \includegraphics[width=0.95\linewidth]{MastersThesis/img/lorenzSimulink} + \end{minipage} +\caption{Comparison of the Lorenz Attractor Model between FFACT and a Simulink implementation~\cite{Simulink}.} +\label{fig:lorenz-simulink} +\end{figure} + +Figure~\ref{fig:lorenz-matlab} shows a comparison between FFACT and Matlab. The main differentiating factor between the two +implementations is that, in Matlab, the system, constructed via a separate lambda function (named \texttt{f} in the example), only receives the initial +conditions of the system at \(t_0\) when calling the \textit{driver} of the simulation --- the call of the \texttt{ode45} +function. In FFACT, the interval for the simulation and which numerical method will be used are completely separate from the system's +description; a \textit{model}. Furthermore, Matlab's description of the system introduces some notational noise via the use of \texttt{vars}, exposing +implementation details to the system's designer. + +\begin{figure}[ht!]
+ \begin{minipage}{0.45\linewidth} + \begin{purespec} + lorenzModel = mdo + x <- integ (sigma * (y - x)) 1.0 + y <- integ (x * (rho - z) - y) 1.0 + z <- integ (x * y - beta * z) 1.0 + let sigma = 10.0 + rho = 28.0 + beta = 8.0 / 3.0 + return $ sequence [x, y, z] + \end{purespec} + \end{minipage} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \begin{minipage}{0.54\linewidth} + \begin{matlab} + sigma = 10; + beta = 8/3; + rho = 28; + f = @(t,vars) + [sigma*(vars(2) - vars(1)); + vars(1)*(rho - vars(3)) - vars(2); + vars(1)*vars(2) - beta*vars(3)]; + [t,vars] = ode45(f,[0 50],[1 1 1]); + \end{matlab} + \end{minipage} +\caption{Comparison of the Lorenz Attractor Model between FFACT and a Matlab implementation.} +\label{fig:lorenz-matlab} +\end{figure} + + +The next comparison is between Mathematica and FFACT, as depicted in Figure~\ref{fig:lorenz-mathematica}. +Unlike Matlab, Mathematica uses the state variables' names when describing the system. However, just like +with Matlab, the initial conditions of the system are only provided when calling the driver of the simulation. Moreover, +there is significant noise in Mathematica's version in comparison to FFACT's version. + +\begin{figure}[ht!]
+ \begin{minipage}{0.45\linewidth} + \begin{purespec} + lorenzModel = mdo + x <- integ (sigma * (y - x)) 1.0 + y <- integ (x * (rho - z) - y) 1.0 + z <- integ (x * y - beta * z) 1.0 + let sigma = 10.0 + rho = 28.0 + beta = 8.0 / 3.0 + return $ sequence [x, y, z] + \end{purespec} + \end{minipage} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \begin{minipage}{0.54\linewidth} + \begin{mathematica} + lorenzModel = NonlinearStateSpaceModel[ + {{sigma (y - x), + x (rho - z) - y, + x y - beta z}, {}}, + {x, y, z}, + {sigma, rho, beta}]; + soln[t_] = StateResponse[ + {lorenzModel, {1, 1, 1}}, + {10, 28, 8/3}, + {t, 0, 50}]; + \end{mathematica} + \end{minipage} +\caption{Comparison of the Lorenz Attractor Model between FFACT and a Mathematica implementation.} +\label{fig:lorenz-mathematica} +\end{figure} + +Finally, Figure~\ref{fig:lorenz-yampa} contrasts FFACT with \texttt{Yampa}, another HEDSL for time modeling and simulation. +Although \texttt{Yampa} is more powerful and expressive than FFACT --- \texttt{Yampa} can accommodate hybrid simulations with +both \textit{discrete} and \textit{continuous} time modeling --- its approach introduces some noise in the Lorenz Attractor model. +The constructs \texttt{proc}, \texttt{pre}, \texttt{>>>}, \texttt{imIntegral}, and \texttt{-<} all place an extra burden on the +system's designer when describing the system. After learning about \texttt{proc-notation}~\cite{Yampa} and Arrows~\footnote{Arrows \href{https://hackage.haskell.org/package/base-4.18.1.0/docs/Control-Arrow.html}{\textcolor{blue}{hackage documentation}}.}, one can describe more complex systems in Yampa. + +\begin{figure}[ht!]
+ \begin{minipage}{0.45\linewidth} + \begin{purespec} + lorenzModel = mdo + x <- integ (sigma * (y - x)) 1.0 + y <- integ (x * (rho - z) - y) 1.0 + z <- integ (x * y - beta * z) 1.0 + let sigma = 10.0 + rho = 28.0 + beta = 8.0 / 3.0 + return $ sequence [x, y, z] + \end{purespec} + \end{minipage} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \hspace{-2.4cm} + \begin{minipage}{0.64\linewidth} + \begin{purespec} + lorenzModel = proc () -> do + rec x <- pre >>> imIntegral 1.0 -< sigma*(y - x) + y <- pre >>> imIntegral 1.0 -< x*(rho - z) - y + z <- pre >>> imIntegral 1.0 -< (x*y) - (beta*z) + let sigma = 10.0 + rho = 28.0 + beta = 8.0 / 3.0 + returnA -< (x, y, z) + \end{purespec} + \end{minipage} +\caption{Comparison of the Lorenz Attractor Model between FFACT and a Yampa implementation.} +\label{fig:lorenz-yampa} +\end{figure} + The function \texttt{integ} alone in FFACT ties the recursion knot that was previously tied via the \texttt{computation} and \texttt{cache} fields from the original integrator data type in FACT. Hence, a lot of implementation noise of the DSL is kept away from the user --- the designer of the system --- when using FFACT. With this Chapter, we addressed the third and final concern explained in Chapter 1, \textit{Introduction}. The final Chapter, \textit{Conclusion}, will conclude this work, pointing out limitations of the project, as well as future improvements and final thoughts about the project. diff --git a/doc/MastersThesis/Lhs/Introduction.lhs b/doc/MastersThesis/Lhs/Introduction.lhs index 85da3dc..985afbb 100644 --- a/doc/MastersThesis/Lhs/Introduction.lhs +++ b/doc/MastersThesis/Lhs/Introduction.lhs @@ -98,152 +98,8 @@ an overloaded syntax.
Once the leak is solved, it is expected that the \textit{t When comparing models in FFACT to other implementations in other ecosystems and programming languages, FFACT's conciseness brings more familiarity, i.e., one using the HEDSL needs less knowledge about the host programming language, Haskell in our case, \textit{and} one can more easily bridge the gap between a mathematical -description of the problem and its analogous written in FFACT, due to less syntatical burden and noise from a user's perpective. Figures~\ref{fig:lorenz-simulink}, -~\ref{fig:lorenz-matlab},~\ref{fig:lorenz-python},~\ref{fig:lorenz-mathematica}, and~\ref{fig:lorenz-yampa} show some comparisons -between the same Lorenz Attractor model in different tecnologies, thus allowing a contrast in notation's conciseness between them. Ideally, a system's description should contain the \textit{least} amount of notation noise and artifacts to his mathematical counterpart. It is worth noting that these examples only show \textit{the system's description}, i.e., the \textit{drivers} of the simulations -are being omitted. - -\begin{figure}[ht!] - \begin{minipage}{0.45\linewidth} - \begin{purespec} - lorenzModel = mdo - x <- integ (sigma * (y - x)) 1.0 - y <- integ (x * (rho - z) - y) 1.0 - z <- integ (x * y - beta * z) 1.0 - let sigma = 10.0 - rho = 28.0 - beta = 8.0 / 3.0 - return $ sequence [x, y, z] - \end{purespec} - \end{minipage} - \begin{minipage}{0.5\linewidth} - \centering - \includegraphics[width=0.95\linewidth]{MastersThesis/img/lorenzSimulink} - \end{minipage} -\caption{Comparison of the Lorenz Attractor Model between FFACT and a Simulink implementation~\cite{Simulink}.} -\label{fig:lorenz-simulink} -\end{figure} - -\begin{figure}[ht!] 
- \begin{minipage}{0.45\linewidth} - \begin{purespec} - lorenzModel = mdo - x <- integ (sigma * (y - x)) 1.0 - y <- integ (x * (rho - z) - y) 1.0 - z <- integ (x * y - beta * z) 1.0 - let sigma = 10.0 - rho = 28.0 - beta = 8.0 / 3.0 - return $ sequence [x, y, z] - \end{purespec} - \end{minipage} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; - \begin{minipage}{0.54\linewidth} - \begin{matlab} - sigma = 10; - beta = 8/3; - rho = 28; - f = @(t,vars) - [sigma*(vars(2) - vars(1)); - vars(1)*(rho - vars(3)) - vars(2); - vars(1)*vars(2) - beta*vars(3)]; - [t,vars] = ode45(f,[0 100],[1 1 1]; - \end{matlab} - \end{minipage} -\caption{Comparison of the Lorenz Attractor Model between FFACT and a Matlab implementation.} -\label{fig:lorenz-matlab} -\end{figure} - -\begin{figure}[ht!] - \begin{minipage}{0.45\linewidth} - \begin{purespec} - lorenzModel = mdo - x <- integ (sigma * (y - x)) 1.0 - y <- integ (x * (rho - z) - y) 1.0 - z <- integ (x * y - beta * z) 1.0 - let sigma = 10.0 - rho = 28.0 - beta = 8.0 / 3.0 - return $ sequence [x, y, z] - \end{purespec} - \end{minipage} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; - \begin{minipage}{0.54\linewidth} - \begin{python} - def lorenzModel(x, y, z): - sigma = 10 - rho = 28 - beta = 8/3 - x_dot = sigma*(y - x) - y_dot = rho*x - y - x*z - z_dot = x*y - beta*z - return np.array([x_dot, y_dot, z_dot]) - \end{python} - \end{minipage} -\caption{Comparison of the Lorenz Attractor Model between FFACT and a Python implementation.} -\label{fig:lorenz-python} -\end{figure} - -\begin{figure}[ht!] 
- \begin{minipage}{0.45\linewidth} - \begin{purespec} - lorenzModel = mdo - x <- integ (sigma * (y - x)) 1.0 - y <- integ (x * (rho - z) - y) 1.0 - z <- integ (x * y - beta * z) 1.0 - let sigma = 10.0 - rho = 28.0 - beta = 8.0 / 3.0 - return $ sequence [x, y, z] - \end{purespec} - \end{minipage} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; - \begin{minipage}{0.54\linewidth} - \begin{mathematica} - lorenzModel = NonlinearStateSpaceModel[ - {{sigma (y - x), - x (rho - z) - y, - x y - beta z}, {}}, - {x, y, z}, - {sigma, rho, beta}]; - soln[t_] = StateResponse[ - {lorenzModel, {10, 10, 10}}, - {10, 28, 8/3}, - {t, 0, 50}]; - \end{mathematica} - \end{minipage} -\caption{Comparison of the Lorenz Attractor Model between FFACT and a Mathematica implementation.} -\label{fig:lorenz-mathematica} -\end{figure} - -\begin{figure}[ht!] - \begin{minipage}{0.45\linewidth} - \begin{purespec} - lorenzModel = mdo - x <- integ (sigma * (y - x)) 1.0 - y <- integ (x * (rho - z) - y) 1.0 - z <- integ (x * y - beta * z) 1.0 - let sigma = 10.0 - rho = 28.0 - beta = 8.0 / 3.0 - return $ sequence [x, y, z] - \end{purespec} - \end{minipage} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; - \hspace{-2.4cm} - \begin{minipage}{0.64\linewidth} - \begin{purespec} - lorenzModel = proc () -> do - rec x <- pre >>> imIntegral 1.0 -< sigma*(y - x) - y <- pre >>> imIntegral 1.0 -< x*(rho - z) - y - z <- pre >>> imIntegral 1.0 -< (x*y) - (beta*z) - let sigma = 10.0 - rho = 28.0 - beta = 8.0 / 3.0 - returnA -< (x, y, z) - \end{purespec} - \end{minipage} -\caption{Comparison of the Lorenz Attractor Model between FFACT and a Yampa implementation~\cite{Yampa} (also in Haskell).} -\label{fig:lorenz-yampa} -\end{figure} - -\newpage +description of the problem and its analogue written in FFACT, due to less syntactical burden and noise from a user's perspective. Examples and comparisons will be +depicted in Chapter 7, \textit{Fixing Recursion}, Section~\ref{sec:examples}.
\section{Outline} diff --git a/doc/MastersThesis/thesis.lof b/doc/MastersThesis/thesis.lof index a02a200..9d455e6 100644 --- a/doc/MastersThesis/thesis.lof +++ b/doc/MastersThesis/thesis.lof @@ -3,63 +3,62 @@ \babel@toc {american}{}\relax \addvspace {10\p@ } \contentsline {figure}{\numberline {1.1}{\ignorespaces The translation between the world of software and the mathematical description of differential equations are more concise and explicit in \texttt {FFACT}.}}{4}{figure.caption.8}% -\contentsline {figure}{\numberline {1.2}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Simulink implementation~\cite {Simulink}.}}{6}{figure.caption.9}% -\contentsline {figure}{\numberline {1.3}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Matlab implementation.}}{7}{figure.caption.10}% -\contentsline {figure}{\numberline {1.4}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Python implementation.}}{7}{figure.caption.11}% -\contentsline {figure}{\numberline {1.5}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Mathematica implementation.}}{7}{figure.caption.12}% -\contentsline {figure}{\numberline {1.6}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Yampa implementation~\cite {Yampa} (also in Haskell).}}{7}{figure.caption.13}% \addvspace {10\p@ } -\contentsline {figure}{\numberline {2.1}{\ignorespaces The combination of these four basic units compose any GPAC circuit (taken from~\cite {Edil2018} with permission).}}{10}{figure.caption.14}% -\contentsline {figure}{\numberline {2.2}{\ignorespaces Polynomial circuits resembles combinational circuits, in which the circuit respond instantly to changes on its inputs (taken from~\cite {Edil2018} with permission).}}{11}{figure.caption.15}% -\contentsline {figure}{\numberline {2.3}{\ignorespaces Types are not just labels; they enhance the manipulated data with new information. 
Their difference in shape can work as the interface for the data.}}{12}{figure.caption.16}% -\contentsline {figure}{\numberline {2.4}{\ignorespaces Functions' signatures are contracts; they purespecify which shape the input information has as well as which shape the output information will have.}}{12}{figure.caption.16}% -\contentsline {figure}{\numberline {2.5}{\ignorespaces Sum types can be understood in terms of sets, in which the members of the set are available candidates for the outer shell type. Parity and possible values in digital states are examples.}}{13}{figure.caption.17}% -\contentsline {figure}{\numberline {2.6}{\ignorespaces Product types are a combination of different sets, where you pick a representative from each one. Digital clocks' time and objects' coordinates in space are common use cases. In Haskell, a product type can be defined using a \textit {record} alongside with the constructor, where the labels for each member inside it are explicit.}}{13}{figure.caption.18}% -\contentsline {figure}{\numberline {2.7}{\ignorespaces Depending on the application, different representations of the same structure need to used due to the domain of interest and/or memory constraints.}}{14}{figure.caption.19}% -\contentsline {figure}{\numberline {2.8}{\ignorespaces The minimum requirement for the \texttt {Ord} typeclass is the $<=$ operator, meaning that the functions $<$, $<=$, $>$, $>=$, \texttt {max} and \texttt {min} are now unlocked for the type \texttt {ClockTime} after the implementation. Typeclasses can be viewed as a third dimension in a type.}}{14}{figure.caption.20}% -\contentsline {figure}{\numberline {2.9}{\ignorespaces Replacements for the validation function within a pipeline like the above are common.}}{15}{figure.caption.21}% -\contentsline {figure}{\numberline {2.10}{\ignorespaces The initial value is used as a starting point for the procedure. The algorithm continues until the time of interest is reached in the unknown function. 
Due to its large time step, the final answer is really far-off from the expected result.}}{17}{figure.caption.22}% -\contentsline {figure}{\numberline {2.11}{\ignorespaces In Haskell, the \texttt {type} keyword works for alias. The first draft of the \texttt {CT} type is a \textit {function}, in which providing a floating point value as time returns another value as outcome.}}{17}{figure.caption.23}% -\contentsline {figure}{\numberline {2.12}{\ignorespaces The \texttt {Parameters} type represents a given moment in time, carrying over all the necessary information to execute a solver step until the time limit is reached. Some useful typeclasses are being derived to these types, given that Haskell is capable of inferring the implementation of typeclasses in simple cases.}}{18}{figure.caption.24}% -\contentsline {figure}{\numberline {2.13}{\ignorespaces The \texttt {CT} type is a function of from time related information to an arbitrary potentially effectful outcome value.}}{19}{figure.caption.25}% -\contentsline {figure}{\numberline {2.14}{\ignorespaces The \texttt {CT} type can leverage monad transformers in Haskell via \texttt {Reader} in combination with \texttt {IO}.}}{19}{figure.caption.26}% +\contentsline {figure}{\numberline {2.1}{\ignorespaces The combination of these four basic units compose any GPAC circuit (taken from~\cite {Edil2018} with permission).}}{9}{figure.caption.9}% +\contentsline {figure}{\numberline {2.2}{\ignorespaces Polynomial circuits resembles combinational circuits, in which the circuit respond instantly to changes on its inputs (taken from~\cite {Edil2018} with permission).}}{10}{figure.caption.10}% +\contentsline {figure}{\numberline {2.3}{\ignorespaces Types are not just labels; they enhance the manipulated data with new information. 
Their difference in shape can work as the interface for the data.}}{11}{figure.caption.11}% +\contentsline {figure}{\numberline {2.4}{\ignorespaces Functions' signatures are contracts; they purespecify which shape the input information has as well as which shape the output information will have.}}{11}{figure.caption.11}% +\contentsline {figure}{\numberline {2.5}{\ignorespaces Sum types can be understood in terms of sets, in which the members of the set are available candidates for the outer shell type. Parity and possible values in digital states are examples.}}{12}{figure.caption.12}% +\contentsline {figure}{\numberline {2.6}{\ignorespaces Product types are a combination of different sets, where you pick a representative from each one. Digital clocks' time and objects' coordinates in space are common use cases. In Haskell, a product type can be defined using a \textit {record} alongside with the constructor, where the labels for each member inside it are explicit.}}{12}{figure.caption.13}% +\contentsline {figure}{\numberline {2.7}{\ignorespaces Depending on the application, different representations of the same structure need to used due to the domain of interest and/or memory constraints.}}{13}{figure.caption.14}% +\contentsline {figure}{\numberline {2.8}{\ignorespaces The minimum requirement for the \texttt {Ord} typeclass is the $<=$ operator, meaning that the functions $<$, $<=$, $>$, $>=$, \texttt {max} and \texttt {min} are now unlocked for the type \texttt {ClockTime} after the implementation. Typeclasses can be viewed as a third dimension in a type.}}{13}{figure.caption.15}% +\contentsline {figure}{\numberline {2.9}{\ignorespaces Replacements for the validation function within a pipeline like the above are common.}}{14}{figure.caption.16}% +\contentsline {figure}{\numberline {2.10}{\ignorespaces The initial value is used as a starting point for the procedure. The algorithm continues until the time of interest is reached in the unknown function. 
Due to its large time step, the final answer is really far-off from the expected result.}}{16}{figure.caption.17}% +\contentsline {figure}{\numberline {2.11}{\ignorespaces In Haskell, the \texttt {type} keyword works for alias. The first draft of the \texttt {CT} type is a \textit {function}, in which providing a floating point value as time returns another value as outcome.}}{16}{figure.caption.18}% +\contentsline {figure}{\numberline {2.12}{\ignorespaces The \texttt {Parameters} type represents a given moment in time, carrying over all the necessary information to execute a solver step until the time limit is reached. Some useful typeclasses are being derived to these types, given that Haskell is capable of inferring the implementation of typeclasses in simple cases.}}{17}{figure.caption.19}% +\contentsline {figure}{\numberline {2.13}{\ignorespaces The \texttt {CT} type is a function of from time related information to an arbitrary potentially effectful outcome value.}}{18}{figure.caption.20}% +\contentsline {figure}{\numberline {2.14}{\ignorespaces The \texttt {CT} type can leverage monad transformers in Haskell via \texttt {Reader} in combination with \texttt {IO}.}}{18}{figure.caption.21}% \addvspace {10\p@ } -\contentsline {figure}{\numberline {3.1}{\ignorespaces Given a parametric record \texttt {ps} and a dynamic value \texttt {da}, the \textit {fmap} functor of the \texttt {CT} type applies the former to the latter. Because the final result is wrapped inside the \texttt {IO} shell, a second \textit {fmap} is necessary.}}{21}{figure.caption.27}% -\contentsline {figure}{\numberline {3.2}{\ignorespaces With the \texttt {Applicative} typeclass, it is possible to cope with functions inside the \texttt {CT} type. 
Again, the \textit {fmap} from \texttt {IO} is being used in the implementation.}}{22}{figure.caption.28}% -\contentsline {figure}{\numberline {3.3}{\ignorespaces The $>>=$ operator used in the implementation is the \textit {bind} from the \texttt {IO} shell. This indicates that when dealing with monads within monads, it is frequent to use the implementation of the internal members.}}{23}{figure.caption.29}% -\contentsline {figure}{\numberline {3.4}{\ignorespaces The typeclass \texttt {MonadIO} transforms a given value wrapped in \texttt {IO} into a different monad. In this case, the parameter \texttt {m} of the function is the output of the \texttt {CT} type.}}{23}{figure.caption.30}% -\contentsline {figure}{\numberline {3.5}{\ignorespaces The ability of lifting numerical values to the \texttt {CT} type resembles three FF-GPAC analog circuits: \texttt {Constant}, \texttt {Adder} and \texttt {Multiplier}.}}{24}{figure.caption.31}% -\contentsline {figure}{\numberline {3.6}{\ignorespaces Example of a State Machine}}{25}{figure.caption.32}% -\contentsline {figure}{\numberline {3.7}{\ignorespaces The integrator functions attend the rules of composition of FF-GPAC, whilst the \texttt {CT} and \texttt {Integrator} types match the four basic units.}}{30}{figure.caption.33}% +\contentsline {figure}{\numberline {3.1}{\ignorespaces Given a parametric record \texttt {ps} and a dynamic value \texttt {da}, the \textit {fmap} functor of the \texttt {CT} type applies the former to the latter. Because the final result is wrapped inside the \texttt {IO} shell, a second \textit {fmap} is necessary.}}{20}{figure.caption.22}% +\contentsline {figure}{\numberline {3.2}{\ignorespaces With the \texttt {Applicative} typeclass, it is possible to cope with functions inside the \texttt {CT} type. 
Again, the \textit {fmap} from \texttt {IO} is being used in the implementation.}}{21}{figure.caption.23}% +\contentsline {figure}{\numberline {3.3}{\ignorespaces The $>>=$ operator used in the implementation is the \textit {bind} from the \texttt {IO} shell. This indicates that when dealing with monads within monads, it is frequent to use the implementation of the internal members.}}{22}{figure.caption.24}% +\contentsline {figure}{\numberline {3.4}{\ignorespaces The typeclass \texttt {MonadIO} transforms a given value wrapped in \texttt {IO} into a different monad. In this case, the parameter \texttt {m} of the function is the output of the \texttt {CT} type.}}{22}{figure.caption.25}% +\contentsline {figure}{\numberline {3.5}{\ignorespaces The ability of lifting numerical values to the \texttt {CT} type resembles three FF-GPAC analog circuits: \texttt {Constant}, \texttt {Adder} and \texttt {Multiplier}.}}{23}{figure.caption.26}% +\contentsline {figure}{\numberline {3.6}{\ignorespaces Example of a State Machine}}{24}{figure.caption.27}% +\contentsline {figure}{\numberline {3.7}{\ignorespaces The integrator functions attend the rules of composition of FF-GPAC, whilst the \texttt {CT} and \texttt {Integrator} types match the four basic units.}}{29}{figure.caption.28}% \addvspace {10\p@ } -\contentsline {figure}{\numberline {4.1}{\ignorespaces The integrator functions are essential to create and interconnect combinational and feedback-dependent circuits.}}{34}{figure.caption.34}% -\contentsline {figure}{\numberline {4.2}{\ignorespaces The developed DSL translates a system described by differential equations to an executable model that resembles FF-GPAC's description.}}{34}{figure.caption.35}% -\contentsline {figure}{\numberline {4.3}{\ignorespaces Because the list implements the \texttt {Traversable} typeclass, it allows this type to use the \textit {traverse} and \textit {sequence} functions, in which both are related to changing the internal behaviour of the nested 
structures.}}{35}{figure.caption.36}% -\contentsline {figure}{\numberline {4.4}{\ignorespaces A \textit {state vector} comprises multiple state variables and requires the use of the \textit {sequence} function to sync time across all variables.}}{35}{figure.caption.37}% -\contentsline {figure}{\numberline {4.5}{\ignorespaces Execution pipeline of a model.}}{36}{figure.caption.38}% -\contentsline {figure}{\numberline {4.6}{\ignorespaces Using only FF-GPAC's basic units and their composition rules, it's possible to model the Lorenz Attractor example.}}{39}{figure.caption.39}% -\contentsline {figure}{\numberline {4.7}{\ignorespaces After \textit {createInteg}, this record is the final image of the integrator. The function \textit {initialize} gives us protecting against wrong records of the type \texttt {Parameters}, assuring it begins from the first iteration, i.e., $t_0$.}}{40}{figure.caption.40}% -\contentsline {figure}{\numberline {4.8}{\ignorespaces After \textit {readInteg}, the final floating point values is obtained by reading from memory a computation and passing to it the received parameters record. The result of this application, $v$, is the returned value.}}{41}{figure.caption.41}% -\contentsline {figure}{\numberline {4.9}{\ignorespaces The \textit {updateInteg} function only does side effects, meaning that only affects memory. The internal variable \texttt {c} is a pointer to the computation \textit {itself}, i.e., the computation being created references this exact procedure.}}{41}{figure.caption.42}% -\contentsline {figure}{\numberline {4.10}{\ignorespaces After setting up the environment, this is the final depiction of an independent variable. 
The reader $x$ reads the values computed by the procedure stored in memory, a second-order Runge-Kutta method in this case.}}{42}{figure.caption.43}% -\contentsline {figure}{\numberline {4.11}{\ignorespaces The Lorenz's Attractor example has a very famous butterfly shape from certain angles and constant values in the graph generated by the solution of the differential equations..}}{43}{figure.caption.44}% +\contentsline {figure}{\numberline {4.1}{\ignorespaces The integrator functions are essential to create and interconnect combinational and feedback-dependent circuits.}}{33}{figure.caption.29}% +\contentsline {figure}{\numberline {4.2}{\ignorespaces The developed DSL translates a system described by differential equations to an executable model that resembles FF-GPAC's description.}}{33}{figure.caption.30}% +\contentsline {figure}{\numberline {4.3}{\ignorespaces Because the list implements the \texttt {Traversable} typeclass, it allows this type to use the \textit {traverse} and \textit {sequence} functions, in which both are related to changing the internal behaviour of the nested structures.}}{34}{figure.caption.31}% +\contentsline {figure}{\numberline {4.4}{\ignorespaces A \textit {state vector} comprises multiple state variables and requires the use of the \textit {sequence} function to sync time across all variables.}}{34}{figure.caption.32}% +\contentsline {figure}{\numberline {4.5}{\ignorespaces Execution pipeline of a model.}}{35}{figure.caption.33}% +\contentsline {figure}{\numberline {4.6}{\ignorespaces Using only FF-GPAC's basic units and their composition rules, it's possible to model the Lorenz Attractor example.}}{38}{figure.caption.34}% +\contentsline {figure}{\numberline {4.7}{\ignorespaces After \textit {createInteg}, this record is the final image of the integrator. 
The function \textit {initialize} gives us protecting against wrong records of the type \texttt {Parameters}, assuring it begins from the first iteration, i.e., $t_0$.}}{39}{figure.caption.35}% +\contentsline {figure}{\numberline {4.8}{\ignorespaces After \textit {readInteg}, the final floating point values is obtained by reading from memory a computation and passing to it the received parameters record. The result of this application, $v$, is the returned value.}}{40}{figure.caption.36}% +\contentsline {figure}{\numberline {4.9}{\ignorespaces The \textit {updateInteg} function only does side effects, meaning that only affects memory. The internal variable \texttt {c} is a pointer to the computation \textit {itself}, i.e., the computation being created references this exact procedure.}}{40}{figure.caption.37}% +\contentsline {figure}{\numberline {4.10}{\ignorespaces After setting up the environment, this is the final depiction of an independent variable. The reader $x$ reads the values computed by the procedure stored in memory, a second-order Runge-Kutta method in this case.}}{41}{figure.caption.38}% +\contentsline {figure}{\numberline {4.11}{\ignorespaces The Lorenz's Attractor example has a very famous butterfly shape from certain angles and constant values in the graph generated by the solution of the differential equations..}}{42}{figure.caption.39}% \addvspace {10\p@ } -\contentsline {figure}{\numberline {5.1}{\ignorespaces During simulation, functions change the time domain to the one that better fits certain entities, such as the \texttt {Solver} and the driver. 
The image is heavily inspired by a figure in~\cite {Edil2017}.}}{44}{figure.caption.45}% -\contentsline {figure}{\numberline {5.2}{\ignorespaces Updated auxiliary types for the \texttt {Parameters} type.}}{46}{figure.caption.46}% -\contentsline {figure}{\numberline {5.3}{\ignorespaces Linear interpolation is being used to transition us back to the continuous domain..}}{49}{figure.caption.47}% -\contentsline {figure}{\numberline {5.4}{\ignorespaces The new \textit {updateInteg} function add linear interpolation to the pipeline when receiving a parametric record.}}{50}{figure.caption.48}% +\contentsline {figure}{\numberline {5.1}{\ignorespaces During simulation, functions change the time domain to the one that better fits certain entities, such as the \texttt {Solver} and the driver. The image is heavily inspired by a figure in~\cite {Edil2017}.}}{43}{figure.caption.40}% +\contentsline {figure}{\numberline {5.2}{\ignorespaces Updated auxiliary types for the \texttt {Parameters} type.}}{45}{figure.caption.41}% +\contentsline {figure}{\numberline {5.3}{\ignorespaces Linear interpolation is being used to transition us back to the continuous domain..}}{48}{figure.caption.42}% +\contentsline {figure}{\numberline {5.4}{\ignorespaces The new \textit {updateInteg} function add linear interpolation to the pipeline when receiving a parametric record.}}{49}{figure.caption.43}% \addvspace {10\p@ } -\contentsline {figure}{\numberline {6.1}{\ignorespaces With just a few iterations, the exponential behaviour of the implementation is already noticeable.}}{52}{figure.caption.50}% -\contentsline {figure}{\numberline {6.2}{\ignorespaces The new \textit {createInteg} function relies on interpolation composed with memoization. 
Also, this combination \textit {produces} results from the computation located in a different memory region, the one pointed by the \texttt {computation} pointer in the integrator.}}{58}{figure.caption.52}% -\contentsline {figure}{\numberline {6.3}{\ignorespaces The function \textit {reads} information from the caching pointer, rather than the pointer where the solvers compute the results.}}{59}{figure.caption.53}% -\contentsline {figure}{\numberline {6.4}{\ignorespaces The new \textit {updateInteg} function gives to the solver functions access to the region with the cached data.}}{60}{figure.caption.54}% -\contentsline {figure}{\numberline {6.5}{\ignorespaces Caching changes the direction of walking through the iteration axis. It also removes an entire pass through the previous iterations.}}{61}{figure.caption.55}% -\contentsline {figure}{\numberline {6.6}{\ignorespaces By using a logarithmic scale, we can see that the final implementation is performant with more than 100 million iterations in the simulation.}}{65}{figure.caption.58}% +\contentsline {figure}{\numberline {6.1}{\ignorespaces With just a few iterations, the exponential behaviour of the implementation is already noticeable.}}{51}{figure.caption.45}% +\contentsline {figure}{\numberline {6.2}{\ignorespaces The new \textit {createInteg} function relies on interpolation composed with memoization. 
Also, this combination \textit {produces} results from the computation located in a different memory region, the one pointed by the \texttt {computation} pointer in the integrator.}}{57}{figure.caption.47}% +\contentsline {figure}{\numberline {6.3}{\ignorespaces The function \textit {reads} information from the caching pointer, rather than the pointer where the solvers compute the results.}}{58}{figure.caption.48}% +\contentsline {figure}{\numberline {6.4}{\ignorespaces The new \textit {updateInteg} function gives to the solver functions access to the region with the cached data.}}{59}{figure.caption.49}% +\contentsline {figure}{\numberline {6.5}{\ignorespaces Caching changes the direction of walking through the iteration axis. It also removes an entire pass through the previous iterations.}}{60}{figure.caption.50}% +\contentsline {figure}{\numberline {6.6}{\ignorespaces By using a logarithmic scale, we can see that the final implementation is performant with more than 100 million iterations in the simulation.}}{64}{figure.caption.53}% \addvspace {10\p@ } -\contentsline {figure}{\numberline {7.1}{\ignorespaces Execution pipeline of a model.}}{67}{figure.caption.59}% -\contentsline {figure}{\numberline {7.2}{\ignorespaces Resettable counter in hardware, inspired by Levent's works~\cite {levent2000, levent2002}.}}{70}{figure.caption.60}% -\contentsline {figure}{\numberline {7.3}{\ignorespaces Diagram of \texttt {createInteg} primitive for intuition.}}{73}{figure.caption.61}% -\contentsline {figure}{\numberline {7.4}{\ignorespaces Results of FFACT are similar to the final version of FACT..}}{76}{figure.caption.62}% +\contentsline {figure}{\numberline {7.1}{\ignorespaces Execution pipeline of a model.}}{66}{figure.caption.54}% +\contentsline {figure}{\numberline {7.2}{\ignorespaces Resettable counter in hardware, inspired by Levent's works~\cite {levent2000, levent2002}.}}{69}{figure.caption.55}% +\contentsline {figure}{\numberline {7.3}{\ignorespaces Diagram of 
\texttt {createInteg} primitive for intuition.}}{72}{figure.caption.56}% +\contentsline {figure}{\numberline {7.4}{\ignorespaces Results of FFACT are similar to the final version of FACT..}}{75}{figure.caption.57}% +\contentsline {figure}{\numberline {7.5}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Simulink implementation~\cite {Simulink}.}}{76}{figure.caption.58}% +\contentsline {figure}{\numberline {7.6}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Matlab implementation.}}{76}{figure.caption.59}% +\contentsline {figure}{\numberline {7.7}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Mathematica implementation.}}{77}{figure.caption.60}% +\contentsline {figure}{\numberline {7.8}{\ignorespaces Comparison of the Lorenz Attractor Model between FFACT and a Yampa implementation.}}{77}{figure.caption.61}% \addvspace {10\p@ } \addvspace {10\p@ } \babel@toc {american}{}\relax diff --git a/doc/MastersThesis/thesis.toc b/doc/MastersThesis/thesis.toc index 12603ed..beb0bee 100644 --- a/doc/MastersThesis/thesis.toc +++ b/doc/MastersThesis/thesis.toc @@ -6,47 +6,48 @@ \contentsline {subsection}{\numberline {1.1.1}Executable Simulation}{3}{subsection.1.1.1}% \contentsline {subsection}{\numberline {1.1.2}Formal Foundation}{4}{subsection.1.1.2}% \contentsline {subsection}{\numberline {1.1.3}Conciseness}{5}{subsection.1.1.3}% -\contentsline {section}{\numberline {1.2}Outline}{7}{section.1.2}% -\contentsline {chapter}{\numberline {2}Design Philosophy}{9}{chapter.2}% -\contentsline {section}{\numberline {2.1}Shannon's Foundation: GPAC}{9}{section.2.1}% -\contentsline {section}{\numberline {2.2}The Shape of Information}{11}{section.2.2}% -\contentsline {section}{\numberline {2.3}Modeling Reality}{15}{section.2.3}% -\contentsline {section}{\numberline {2.4}Making Mathematics Cyber}{17}{section.2.4}% -\contentsline {chapter}{\numberline {3}Effectful Integrals}{20}{chapter.3}% -\contentsline 
{section}{\numberline {3.1}Uplifting the CT Type}{20}{section.3.1}%
-\contentsline {section}{\numberline {3.2}GPAC Bind I: CT}{23}{section.3.2}%
-\contentsline {section}{\numberline {3.3}Exploiting Impurity}{25}{section.3.3}%
-\contentsline {section}{\numberline {3.4}GPAC Bind II: Integrator}{28}{section.3.4}%
-\contentsline {section}{\numberline {3.5}Using Recursion to solve Math}{30}{section.3.5}%
-\contentsline {chapter}{\numberline {4}Execution Walkthrough}{33}{chapter.4}%
-\contentsline {section}{\numberline {4.1}From Models to Models}{33}{section.4.1}%
-\contentsline {section}{\numberline {4.2}Driving the Model}{36}{section.4.2}%
-\contentsline {section}{\numberline {4.3}An attractive example}{37}{section.4.3}%
-\contentsline {section}{\numberline {4.4}Lorenz's Butterfly}{43}{section.4.4}%
-\contentsline {chapter}{\numberline {5}Travelling across Domains}{44}{chapter.5}%
-\contentsline {section}{\numberline {5.1}Time Domains}{44}{section.5.1}%
-\contentsline {section}{\numberline {5.2}Tweak I: Interpolation}{46}{section.5.2}%
-\contentsline {chapter}{\numberline {6}Caching the Speed Pill}{51}{chapter.6}%
-\contentsline {section}{\numberline {6.1}Performance}{51}{section.6.1}%
-\contentsline {section}{\numberline {6.2}The Saving Strategy}{53}{section.6.2}%
-\contentsline {section}{\numberline {6.3}Tweak II: Memoization}{54}{section.6.3}%
-\contentsline {section}{\numberline {6.4}A change in Perspective}{60}{section.6.4}%
-\contentsline {section}{\numberline {6.5}Tweak III: Model and Driver}{61}{section.6.5}%
-\contentsline {section}{\numberline {6.6}Results with Caching}{63}{section.6.6}%
-\contentsline {chapter}{\numberline {7}Fixing Recursion}{66}{chapter.7}%
-\contentsline {section}{\numberline {7.1}Integrator's Noise}{66}{section.7.1}%
-\contentsline {section}{\numberline {7.2}The Fixed-Point Combinator}{68}{section.7.2}%
-\contentsline {section}{\numberline {7.3}Value Recursion with Fixed-Points}{70}{section.7.3}%
-\contentsline {section}{\numberline {7.4}Tweak IV: Fixing FACT}{72}{section.7.4}%
-\contentsline {chapter}{\numberline {8}Conclusion}{77}{chapter.8}%
-\contentsline {section}{\numberline {8.1}Final Thoughts}{77}{section.8.1}%
-\contentsline {section}{\numberline {8.2}Future Work}{78}{section.8.2}%
-\contentsline {subsection}{\numberline {8.2.1}Formalism}{78}{subsection.8.2.1}%
-\contentsline {subsection}{\numberline {8.2.2}Extensions}{79}{subsection.8.2.2}%
-\contentsline {subsection}{\numberline {8.2.3}Refactoring}{79}{subsection.8.2.3}%
-\contentsline {chapter}{\numberline {9}Appendix}{81}{chapter.9}%
-\contentsline {section}{\numberline {9.1}Literate Programming}{81}{section.9.1}%
-\contentsline {chapter}{References}{83}{section*.63}%
+\contentsline {section}{\numberline {1.2}Outline}{6}{section.1.2}%
+\contentsline {chapter}{\numberline {2}Design Philosophy}{8}{chapter.2}%
+\contentsline {section}{\numberline {2.1}Shannon's Foundation: GPAC}{8}{section.2.1}%
+\contentsline {section}{\numberline {2.2}The Shape of Information}{10}{section.2.2}%
+\contentsline {section}{\numberline {2.3}Modeling Reality}{14}{section.2.3}%
+\contentsline {section}{\numberline {2.4}Making Mathematics Cyber}{16}{section.2.4}%
+\contentsline {chapter}{\numberline {3}Effectful Integrals}{19}{chapter.3}%
+\contentsline {section}{\numberline {3.1}Uplifting the CT Type}{19}{section.3.1}%
+\contentsline {section}{\numberline {3.2}GPAC Bind I: CT}{22}{section.3.2}%
+\contentsline {section}{\numberline {3.3}Exploiting Impurity}{24}{section.3.3}%
+\contentsline {section}{\numberline {3.4}GPAC Bind II: Integrator}{27}{section.3.4}%
+\contentsline {section}{\numberline {3.5}Using Recursion to solve Math}{29}{section.3.5}%
+\contentsline {chapter}{\numberline {4}Execution Walkthrough}{32}{chapter.4}%
+\contentsline {section}{\numberline {4.1}From Models to Models}{32}{section.4.1}%
+\contentsline {section}{\numberline {4.2}Driving the Model}{35}{section.4.2}%
+\contentsline {section}{\numberline {4.3}An attractive example}{36}{section.4.3}%
+\contentsline {section}{\numberline {4.4}Lorenz's Butterfly}{42}{section.4.4}%
+\contentsline {chapter}{\numberline {5}Travelling across Domains}{43}{chapter.5}%
+\contentsline {section}{\numberline {5.1}Time Domains}{43}{section.5.1}%
+\contentsline {section}{\numberline {5.2}Tweak I: Interpolation}{45}{section.5.2}%
+\contentsline {chapter}{\numberline {6}Caching the Speed Pill}{50}{chapter.6}%
+\contentsline {section}{\numberline {6.1}Performance}{50}{section.6.1}%
+\contentsline {section}{\numberline {6.2}The Saving Strategy}{52}{section.6.2}%
+\contentsline {section}{\numberline {6.3}Tweak II: Memoization}{53}{section.6.3}%
+\contentsline {section}{\numberline {6.4}A change in Perspective}{59}{section.6.4}%
+\contentsline {section}{\numberline {6.5}Tweak III: Model and Driver}{60}{section.6.5}%
+\contentsline {section}{\numberline {6.6}Results with Caching}{62}{section.6.6}%
+\contentsline {chapter}{\numberline {7}Fixing Recursion}{65}{chapter.7}%
+\contentsline {section}{\numberline {7.1}Integrator's Noise}{65}{section.7.1}%
+\contentsline {section}{\numberline {7.2}The Fixed-Point Combinator}{67}{section.7.2}%
+\contentsline {section}{\numberline {7.3}Value Recursion with Fixed-Points}{69}{section.7.3}%
+\contentsline {section}{\numberline {7.4}Tweak IV: Fixing FACT}{71}{section.7.4}%
+\contentsline {section}{\numberline {7.5}Examples and Comparisons}{75}{section.7.5}%
+\contentsline {chapter}{\numberline {8}Conclusion}{78}{chapter.8}%
+\contentsline {section}{\numberline {8.1}Final Thoughts}{78}{section.8.1}%
+\contentsline {section}{\numberline {8.2}Future Work}{79}{section.8.2}%
+\contentsline {subsection}{\numberline {8.2.1}Formalism}{79}{subsection.8.2.1}%
+\contentsline {subsection}{\numberline {8.2.2}Extensions}{80}{subsection.8.2.2}%
+\contentsline {subsection}{\numberline {8.2.3}Refactoring}{80}{subsection.8.2.3}%
+\contentsline {chapter}{\numberline {9}Appendix}{82}{chapter.9}%
+\contentsline {section}{\numberline {9.1}Literate Programming}{82}{section.9.1}%
+\contentsline {chapter}{References}{84}{section*.62}%
 \babel@toc {american}{}\relax
 \babel@toc {american}{}\relax
 \babel@toc {american}{}\relax

From 7b103a8a71de8298b2e361fdaf205fd16b7ea3ef Mon Sep 17 00:00:00 2001
From: EduardoLR10
Date: Tue, 6 May 2025 20:49:36 -0300
Subject: [PATCH 10/10] Update intro and acknowledgments

---
 doc/MastersThesis/Lhs/Introduction.lhs    | 46 +++++++++++------------
 doc/MastersThesis/tex/acknowledgments.tex | 15 ++++----
 2 files changed, 30 insertions(+), 31 deletions(-)

diff --git a/doc/MastersThesis/Lhs/Introduction.lhs b/doc/MastersThesis/Lhs/Introduction.lhs
index 985afbb..5dd7055 100644
--- a/doc/MastersThesis/Lhs/Introduction.lhs
+++ b/doc/MastersThesis/Lhs/Introduction.lhs
@@ -37,36 +37,36 @@ By making an executable software capable of running continuous time simulations,
 Furthermore, this implementation is based on \texttt{Aivika}~\footnote{\texttt{Aivika} \href{https://github.com/dsorokin/aivika}{\textcolor{blue}{source code}}.} --- an open source multi-method library for simulating a variety of paradigms, including partial support for physical dynamics, written in Haskell. Our version is modified for our needs, such as demonstrating similarities between the implementation and GPAC, shrinking some functionality in favor of focusing on continuous time modeling, and re-thinking the overall organization of the project for better understanding, alongside code refactoring using other Haskell's abstractions. So, this reduced and refactored version of \texttt{Aivika}, so-called \texttt{FACT}~\footnote{\texttt{FACT} \href{https://github.com/FP-Modeling/fact/releases/tag/3.0}{\textcolor{blue}{source code}}.}, will be a Haskell Embedded Domain-Specific Language (HEDSL) within the model-based engineering domain. The built DSL will explore Haskell's specific features and details, such as the type system and typeclasses, to solve differential equations.
 Figure \ref{fig:introExample} shows a side-by-side comparison between the original implementation of Lorenz Attractor in FACT, presented in~\cite{Lemos2022}, and its final form, so-called FFACT, for the same physical system.
 \begin{figure}[ht!]
-  \begin{minipage}{0.45\linewidth}
+  \begin{minipage}{0.5\linewidth}
   \begin{purespec}
 -- FACT
 lorenzModel = do
-  integX <- createInteg 1.0
-  integY <- createInteg 1.0
-  integZ <- createInteg 1.0
-  let x = readInteg integX
-      y = readInteg integY
-      z = readInteg integZ
-      sigma = 10.0
-      rho = 28.0
-      beta = 8.0 / 3.0
-  updateInteg integX (sigma * (y - x))
-  updateInteg integY (x * (rho - z) - y)
-  updateInteg integZ (x * y - beta * z)
-  return $ sequence [x, y, z]
+    integX <- createInteg 1.0
+    integY <- createInteg 1.0
+    integZ <- createInteg 1.0
+    let x = readInteg integX
+        y = readInteg integY
+        z = readInteg integZ
+        sigma = 10.0
+        rho = 28.0
+        beta = 8.0 / 3.0
+    updateInteg integX (sigma * (y - x))
+    updateInteg integY (x * (rho - z) - y)
+    updateInteg integZ (x * y - beta * z)
+    return $ sequence [x, y, z]
   \end{purespec}
   \end{minipage}
   \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
-  \begin{minipage}{0.45\linewidth}
+  \begin{minipage}{0.49\linewidth}
   \begin{purespec}
 -- FFACT
-lorenzModel = mdo
-  x <- integ (sigma * (y - x)) 1.0
-  y <- integ (x * (rho - z) - y) 1.0
-  z <- integ (x * y - beta * z) 1.0
-  let sigma = 10.0
-      rho = 28.0
-      beta = 8.0 / 3.0
-  return $ sequence [x, y, z]
+lorenzModel =
+  mdo x <- integ (sigma * (y - x)) 1.0
+      y <- integ (x * (rho - z) - y) 1.0
+      z <- integ (x * y - beta * z) 1.0
+      let sigma = 10.0
+          rho = 28.0
+          beta = 8.0 / 3.0
+      return $ sequence [x, y, z]
   \end{purespec}
   \end{minipage}
   \caption{The translation between the world of software and the mathematical description of differential equations are more concise and explicit in \texttt{FFACT}.}

diff --git a/doc/MastersThesis/tex/acknowledgments.tex b/doc/MastersThesis/tex/acknowledgments.tex
index 24083e4..077c4ae 100644
--- a/doc/MastersThesis/tex/acknowledgments.tex
+++ b/doc/MastersThesis/tex/acknowledgments.tex
@@ -1,14 +1,13 @@
-First and foremost, I thank my main advisor Edil Medeiros.
+First, I must acknowledge my main advisor, Edil Medeiros.
 Since graduation, and now in my masters, he trusted that my effort could go on and beyond, surpassing my own expectations and limits, getting out of my comfort zone.
 All the endless meetings, including on the weekends, filled with thoughtful advice and helpful comments,
-will be the most remarkable memory of the best teacher I have encountered to this day -- title that Edil got back when I was doing graduation and he
+will be the most remarkable memory of the best mentor I have encountered to this day -- a title that Edil got back when I was doing graduation, and he
 continues to be worthy.

-I'm thankful to my Computer Science study group, Dr.Nekoma, for the continue joyful programminig practices leveraging functional programming principles, something that is
-still the core foundation of the present work, even though this paradigm remains as uncommon both in the industry of software development and in the academia.
+I'm thankful to my Computer Science study group, Dr.Nekoma, for the continued joyful programming practices leveraging functional programming principles, something that is
+still the core foundation of the present work, even though this paradigm remains uncommon both in the software development industry and in academia.

-I'm grateful for the company I'm currently working in, Tontine Trust, where I'm surrounded by real problems being solved in the Haskell; a programming language
-that I ended up being a fond of since my final graduation project written in it.
+I'm grateful for the company I'm currently working at, Tontine Trust, where I'm a member of a great team delivering a challenging product to the market. Some of these challenges are being solved in Haskell, a programming language
+I grew fond of since my final graduation project was written in it. This work is a continuation of that.

-Finally, a special thanks for everybody that took any amount of time to read any draft I had of this thesis, providing honest feedback to enhance my thesis
-before the end of my masters.
+Finally, a special thanks to everybody who took any amount of time to read any draft I had of this dissertation, providing honest feedback.
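Note for reviewers: the Lorenz listings in the `Introduction.lhs` hunk above rely on FACT/FFACT combinators (`createInteg`, `integ`, `mdo`) and so cannot be run standalone. As a rough sanity check of the equations themselves, the same system can be sketched with a plain forward-Euler loop in vanilla Haskell; this is purely an illustrative stand-in, not FACT's solver (which the thesis drives quite differently), and the step size `h = 0.01` is an arbitrary choice:

```haskell
-- Standalone sketch: forward-Euler integration of the Lorenz system
-- from the figure, with the same parameters (sigma, rho, beta) and
-- the same initial condition (1, 1, 1). NOT the FACT/FFACT API.
module Main where

type State = (Double, Double, Double)

sigma, rho, beta :: Double
sigma = 10.0
rho   = 28.0
beta  = 8.0 / 3.0

-- One explicit Euler step of size h over the Lorenz right-hand side.
step :: Double -> State -> State
step h (x, y, z) =
  ( x + h * (sigma * (y - x))
  , y + h * (x * (rho - z) - y)
  , z + h * (x * y - beta * z) )

-- Iterate n steps from (1, 1, 1).
run :: Int -> Double -> State
run n h = iterate (step h) (1.0, 1.0, 1.0) !! n

main :: IO ()
main = print (run 1000 0.01)
```

A fixed-step Euler loop only approximates what the library computes, but it makes the chaotic-yet-bounded behaviour of the attractor easy to eyeball when comparing outputs against the FFACT model.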