Switching Careers to Software Engineering: A Retrospective

    written by Zach Morrissey on 2018-08-01

    I’ve just been hired into my first dedicated software engineering role. I returned to school for CS in November 2016 and emerged on the other side almost two years later. This is the first thing I’ve done in my career that feels like it fits how I think and work, and I’m planning on making the most of it.

    My approach:

    • Move into a semi-technical role where I had better access to engineering (TPM / BI Analyst).
    • Start a CS degree. I opted for the Oregon State Online Post-bacc in CS, which is an excellent program that I highly recommend.
    • Work up a sizable resume of personal projects that prove I can do the work (I’m still working on this bit; this blog is one of them).

    And I’m glad to say it worked!

    Three Important Aspects of Switching into Software Development

    Here are the different angles I took on understanding the career switch as I went through it. It’s an odd switch to make, but depending on where you’re transitioning from, it can be an excellent choice.

    The Learning Aspects

    Compared to many other careers you might switch into, a lot of people moving into CS will be touching a STEM-type field for the first time in their lives. At the same time, it’s one of the more accessible sciences to get into because computers are ubiquitous and drive everything in the world today. I’ve spent a lot of the last two years just learning as much as I could.

    • First and most obviously, it’s hard. Few people can do it quickly. It takes most people a few years at least. When you hear stories about people who were grinding Leetcode for 3 months and got the offer, know that they’re vastly outside the normal distribution of how long it takes people.

    • The problem domain is vast. You quickly get into territory for which there is no set solution, everything has a trade-off, and it’s often difficult to articulate why. This is a significant generalization, but the breadth of problems that I’ve had to solve in my short time doing software engineering vastly eclipses most roles I’ve had or worked with previously. There’s so much to know, and you’ll likely never be an expert in more than a few small subdomains of the large problem space.

    • There are few, if any, scenarios today where you’ll happen to fall into software engineering without significant, dedicated effort. It’s common for senior engineers to mention how they may have done this back in the day when they got their start programming for Amiga, Commodore 64, or in the early days of the web. That path represents a small minority of those switching careers into development.

    • There are plenty of ways to get the education for it. Bias showing, I recommend spending the time and money to go to school through an accredited college/university. Boot camps are enticing in that they’re quick turnarounds, can get you very career-ready compared to school, and are generally much less expensive. Still, I’d be cautious of the drastic uptick in boot camps that have sprung up as of late: there are plenty of excellent ones, but there are also enough bad ones to offset those. You may find yourself at a disadvantage for any job that requires a solid foundation in data structures, algorithms, operating systems, etc. Boot camps generally focus very heavily on web development, and usually teach a single ecosystem (i.e. Node/Express, Ruby on Rails, etc.).

    The Technical Aspects

    There’s a lot of different tech out there and you’re not going to learn it all. There’s likely a lot of stuff you’ll never touch in your lifetime. It’s a very wide field with tons of specializations, and for most of them you’ll have limited ability to gain meaningful experience in school, at a boot camp, or while self-taught. Seeking out which technologies you want to learn and use is good, but with some caveats:

    • As a junior, being opinionated about which technologies you want to use will limit your opportunities. If you don’t particularly like Microsoft, or don’t like Java, you may cut yourself off from gaining experience. You can be selective if you so desire, but it will restrict your career in the short term. Getting that sexy startup job is significantly easier with two years of experience in any sort of software development role than with none.

    • The single most advantageous skill is being resourceful and looking for answers. Good engineers are the ones who can use the resources available to them (Google, Stack Overflow, GitHub, etc.) and make decent, working code out of it.

    • Know at least one low-level or compiled language and at least one high-level or interpreted language. The concepts from the lower-level language will carry over to using the high-level one. Someone using only a high-level language may never know why something like copying an array is slow, because they’re not aware of how memory works in relation to the code they’re writing. Writing C or a similar low-level language quickly eradicates many common misunderstandings (there’s a small Python sketch after this list illustrating the copying point).

    • Buy/build a home server or get an AWS account. Start tinkering as a way of learning. Set up some services and have fun.

    • Learn databases and SQL. Most applications you use will interact with a database in some manner. It’s easy to get proficient.

    • Get good at setting up your working environment. Having a great local setup using tools you’re comfortable with pays significant productivity dividends. There’s a reason that developers are so choosy about which tools they like.
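
    To make the memory point above concrete, here’s a minimal Python sketch (the numbers are arbitrary and not from any real project) showing the difference between aliasing a list and actually copying it. The copy has to touch every element, which is exactly the kind of cost that stays invisible until you’ve seen how memory works at a lower level.

    import timeit

    big = list(range(1000000))

    alias = big     # no copy: both names point at the same list in memory
    clone = big[:]  # real copy: allocates new storage and copies every element

    alias[0] = -1
    print(big[0])    # -1, because 'alias' and 'big' are the same object
    print(clone[0])  # 0, because 'clone' has its own memory

    # copying is O(n), so the cost grows with the size of the list
    print(timeit.timeit(lambda: big[:], number=100))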

    The Human Aspects

    • Managers outside of development won’t help your career switch. If you’re working on improving your skills to the point where you could start as a developer (through school, a code boot camp, etc.), they will often voice support but generally have no idea what they can actually do to help you. That doesn’t mean you shouldn’t be honest with them about your intentions, but don’t expect them to come to you with solutions.

    • Mentorship is exceedingly difficult to come by. You must purposefully seek it out if you want it. If you end up having an opportunity with someone who shares an interest in your future, nurture that flame. Let them know you’re interested. It’s easy to feel defensive about being inexperienced. This is a nice way to gain some perspective on what that means for you right now.

    • When you hear about software engineer hiring shortages, they’re referencing senior engineering talent. Junior listings are often saturated with applicants and suffer from a glut of genuinely unqualified ones. That means people who list things on their resume they have zero experience with, to the point that they haven’t even done a Udemy tutorial on them.

    • There’s a lot of noise out there. Many people have many opinions (including this one), and lots of them will tell you theirs is the best approach. If there’s only one thing to keep in mind, it’s to be patient with yourself and understand that nobody else is in your exact scenario.

    Switching into software development these days can feel like jumping on a bandwagon where everyone is smarter than you, paid better than you, and has intense opinions on what is right and wrong. It can feel like an insurmountable hill to climb.

    However, there are plenty of people out there like myself for whom software engineering is an open door to a much better career. Once I’ve been in the role for a while, I’ll write a part two to this post where I cover my transition period.

    My Very Own Airflow Cluster

    written by Zach Morrissey on 2018-06-25

    As someone who writes a lot of one-off batch scripts, the rise of DAG (Directed Acyclic Graph) task engines has made my life a lot better. I’ve used Luigi at work for about 2 years now. For feeding the backend of this website with data, I decided to set up an Airflow cluster.

    Airflow screenshot

    As opposed to this paradigm:

    • Write a script.
    • Set it up in cron.
    • Check the logs whenever something goes wrong.

    You get a whole lot more bang for your buck, with a similar amount of work. The DAG paradigm is more like:

    • Write a script.
    • Write an airflow DAG, where you call that script.
    • Set a schedule for the script.
    • Check on it from the built-in Web UI.
    • Stop and restart tasks whenever you want to.
    • View the console output from the Web UI.

    It really helps certain types of batch processes scale, while simplifying the process of managing and deploying them. You can relatively easily set up Airflow to schedule thousands of tasks without much more configuration than what you’d normally write.

    The Good and Bad Reasons to Use Airflow

    There are a number of gotchas you should know prior to using one of the DAG engines, as they all fall into the same traps.

    • cron is incredibly difficult to beat for reliability. There’s a reason it’s ubiquitous. DAG engines are usually for tasks that need to scale past what cron is built for. If you’re just trying to run a single script on the regular, stick with that instead.
    • DAG engines are meant for batch programs. If you need anything real-time, look elsewhere.
    • Usually there’s a non-trivial amount of setup involved at the beginning.
    • Sometimes scaling can be a pain. For Airflow, you’ll need to set up a task queue like Celery and a number of different nodes (web server, database, etc.) once you scale past a certain point.
    • There’s a relatively mature Docker image that you can reuse if you don’t want to configure it yourself.

    Diving In

    There are plenty of ways that you can configure Airflow. There are varying levels of complexity for varying levels of requirements, specified by the Executor types (i.e. what does what in your scenario).

    • SequentialExecutor - Uses SQLite as a backend and runs tasks one at a time, which is mainly useful for testing/debugging. This is generally what you’d run on your own machine if you’re just testing a workflow.
    • LocalExecutor - Runs tasks in parallel local processes, using just a database connection as a backend.
    • CeleryExecutor - For distributing tasks out to worker nodes. This uses Celery as a queue. Requires you to set up other nodes as worker nodes.

    Here’s a diagram of how it works.

    Airflow architecture

    For my purposes, I went with LocalExecutor, and set up the configuration in the outlined area above. I set up two nodes for this in my Proxmox cluster.

    1. Airflow web server. This was just based on a Ubuntu 16.04 instance.
    2. Postgres backend. For the LocalExecutor option, this is the only additional node required.

    The LocalExecutor option (configured with Postgres as the backend in this instance) probably has the highest payoff-to-effort ratio, especially compared to how difficult the CeleryExecutor can be to set up. Once you’ve created the database, verified the connection works, and run airflow initdb, you’re good to go.
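
    For reference, the relevant pieces of airflow.cfg end up looking something like this (the host, database name, and credentials below are placeholders, not my actual values):

    [core]
    # run tasks in parallel local processes instead of one at a time
    executor = LocalExecutor

    # point Airflow's metadata database at the Postgres node
    sql_alchemy_conn = postgresql+psycopg2://airflow:airflow@pg-host:5432/airflow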

    Now you can start making DAGs.

    DAGS, Operators, Upstream, Huh?

    There’s a lot of lingo to be learned when using Airflow.

    • DAG - This is a single workflow, wherein you can arrange tasks and dependencies.
    • Operator - A single unit of work that can be instantiated multiple times to achieve a particular output/goal/etc. There are things like BashOperator for executing shell commands, or PythonOperator for calling Python functions, etc.
    • DAG Run - A single execution of a DAG of tasks. Each operator in the DAG is instantiated and executed as its dependencies are completed.

    Example DAG

    Fittingly, the DAG is a graph, both directed and acyclic. Task execution flows in only one direction.

    An Airflow dag instance
    • The arrows indicate execution order.
    • An arrow from task 1 to task 2 implies task 1 is a dependency for task 2. If task 1 fails, task 2 will not execute.
    • Nodes can have multiple dependencies, like node 4 above.
    • Nodes can have multiple downstream tasks, like nodes 1 and 3.
    • Task execution timing is non-deterministic, so there’s no guarantee for whether sibling tasks 2 or 3 would execute first.

    In Code

    Here are some example statements that would make a DAG resembling the one in the graphic above.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash_operator import BashOperator

    # 'owner' just labels who the DAG belongs to in the UI; a start_date is
    # required before Airflow will create any DAG runs
    default_args = {'owner': 'airflow', 'start_date': datetime(2018, 6, 1)}
    dag = DAG('example', default_args=default_args, schedule_interval=None)  # trigger runs manually

    t1 = BashOperator(task_id='t1', bash_command='echo "task 1"', dag=dag)
    t2 = BashOperator(task_id='t2', bash_command='echo "task 2"', dag=dag)
    t3 = BashOperator(task_id='t3', bash_command='echo "task 3"', dag=dag)
    t4 = BashOperator(task_id='t4', bash_command='echo "task 4"', dag=dag)
    t5 = BashOperator(task_id='t5', bash_command='echo "task 5"', dag=dag)

    # now arrange the tasks in the dag
    t2.set_upstream(t1)
    t3.set_upstream(t1)
    t4.set_upstream(t2)
    t4.set_upstream(t3)
    t5.set_upstream(t3)
    

    Now that this is defined, you can instantiate a DAG Run from the webserver.

    Notes On Running Airflow

    There have been a lot of justifiable complaints about Airflow’s deployment story; it can be very frustrating. Scaling Airflow is confusing and fault-prone. For those who don’t want to deal with the headache of managing the installation on their own, Google offers a hosted Airflow service called Cloud Composer, or you can look into alternatives like Luigi or Mara. While I don’t fully love Airflow, I still haven’t found something better.

    Headaches aside, it’s still worth using a DAG engine because of the great improvements in both the quality and visibility of batch processing jobs.

    Features to Avoid

    Since Airflow has been in the Apache Incubator for the past year or so, it’s been gaining features rapidly. I choose to avoid these:

    • XComs - The biggest downside of Airflow compared to Luigi is that it doesn’t force you to write idempotent tasks in the same way. It’s easy to accidentally end up writing something whose outputs aren’t reproducible. Treating a task like a function with a single input and a single output works best (there’s a short sketch of this after the list).
    • Branching Operators - If you need to choose between multiple sets of logic, it’s best just to dynamically set your task dependencies at runtime, as opposed to creating a branching task.
    • Bitshift Operators as Upstream/Downstream - Airflow recently introduced a way of setting upstream/downstream tasks by using bitshift operators like << or >>. This is bad. Explicit is good! set_upstream() is not particularly verbose.
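
    To make the “single input, single output” idea from the XComs bullet concrete, here’s a rough sketch (the paths and names are invented for the example, and it assumes a dag object like the one defined earlier) of a task that reads from one known location and writes to another, so re-running it reproduces the same output instead of passing state around through XComs:

    from airflow.operators.python_operator import PythonOperator

    def summarize_orders(ds, **kwargs):
        # read a known, date-stamped input and write a known, date-stamped output;
        # re-running the task for the same date just overwrites the same file
        in_path = '/data/raw/orders_{}.csv'.format(ds)
        out_path = '/data/summaries/orders_{}.txt'.format(ds)
        with open(in_path) as f:
            line_count = sum(1 for _ in f)
        with open(out_path, 'w') as f:
            f.write(str(line_count))

    summarize = PythonOperator(
        task_id='summarize_orders',
        python_callable=summarize_orders,
        provide_context=True,  # passes ds (the execution date) into the callable
        dag=dag,
    )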

    In Conclusion

    If you’re frustrated with cron or other simple ways of deploying batch processes, give it a shot. Be aware of the shortcomings, though.

    The Switch to Webpack 4

    written by Zach Morrissey on 2018-06-09

    Gulp to Webpack Transition

    I’ve always been interested in using Webpack to generate this site, but the heavily config-oriented earlier versions of Webpack (2.x, 3.x) were a hassle. The explicit pipelines from Gulp were always significantly easier. Yet, after using Nuxt.js for a recent project (which works exceedingly well with Webpack), I decided that it was time to jump on the bandwagon.

    Reasons for Switching

    Gulp is very good at creating flexible tasks to be run on a regular basis. Webpack and Gulp, however, are two separate tools with completely different aims.

    The Good

    • Outside of NPM itself, Webpack is the most popular build tool today, with about a 45% increase in users who have used it and would like to use it again.
    • Webpack has a built-in integration with Lektor, which made for a better development story once all the kinks were worked out.
    • Webpack is a module bundler by default. If you orient your assets in a way that Webpack can consume them, you’ll end up with their outputs bundled nicely into a single file, as you’d expect.
    • Webpack 4 has ‘zero config’ options. The configuration file is a well-known pain point for Webpack, as it tends to create a lot of headaches very quickly. Being able to replace my ailing, convoluted Webpack 2/3 config files with defaults was appealing to me.
    • The move to ‘convention over configuration’ is approaching: sane defaults, opinions drawn from common usage, and tooling built for the majority of use cases.
    • Includes splitting into ‘prod’ and ‘dev’ modes by default. It was always possible, but this is a major part of the webpack API now.

    The Bad

    • Webpack documentation is confusing, sparse, and littered with examples that either make no sense or have been broken by version upgrades. Finding working examples is very difficult.
    • NPM module configuration for Webpack is a major source of frustration, because libraries are getting replaced & deprecated so quickly that what may have worked today is not guaranteed to work a few months from now.
    • NPM package installation and management is still more difficult than it seems like it should be. Since NPM packages take a more Unix-like ‘single function’ approach, it drastically increases the number of dependencies per project (this website has 754 dependencies right now).
    • Transitioning from a working Webpack 2/3 config to Webpack 4 (either using config or zero-config options) is difficult.

    What does it look like?

    Previously - Gulp

    Gulp pipelines usually have 1 or more inputs of the same filetype, and then produce explicit outputs using pipes.

    // Process main scss for the site.
    const gulp = require('gulp');
    const rename = require('gulp-rename');
    const source_maps = require('gulp-sourcemaps');
    const sass = require('gulp-sass');
    const clean_css = require('gulp-clean-css');

    gulp.task('main-css', function() {
        return gulp.src('./src/index.scss')
            .pipe(rename('main.min.css'))
            .pipe(source_maps.init())
            .pipe(sass({ outputStyle: 'expanded' }))
            .pipe(clean_css())
            .pipe(source_maps.write('.'))
            .pipe(gulp.dest('../assets/css/'));
    });
    

    Gulp tasks operate on inputs and outputs, so you don’t necessarily need to set things up so that they rely on one another.

    Now - Webpack

    Webpack configs, by contrast, look to an entry point (normally ./src/index.js by convention) and bundle all of the dependencies of that file.

    const path = require('path');
    const MiniCssExtractPlugin = require('mini-css-extract-plugin');
    
    module.exports = {
      mode: "production",
      output: {
        path: path.dirname(__dirname) + '/assets/static/gen',
        filename: "[name].js"
      },
      plugins: [
        new MiniCssExtractPlugin({
          filename: "main.css",
          chunkFilename: "[name].css"
        })
      ],
      module: {
        rules: [
          {
            test: /\.scss$/,
            use: [
              MiniCssExtractPlugin.loader,  // extract the CSS to its own file (instead of injecting it with style-loader)
              'css-loader',                 // resolve imports/urls inside the compiled CSS
              'sass-loader'                 // compile Sass first; loaders run bottom-up
            ]
          }
        ]
      }
    };
    

    In prior versions of Webpack it was required to make many more things explicit, compared to now where you can opt to use conventional inputs/outputs instead of specifying them.

    Lessons Learned

    The biggest struggle in using Webpack is bridging the gap between the concepts and the implementation.

    • The concepts are well established and internally consistent. Check out the documentation if you’re curious about what any of the terms / verbs used in Webpack-speak mean.
    • The implementation is where things are difficult. It’s very tough to translate the concepts espoused in how Webpack should operate to how it actually does operate.
      • Things like ‘entry’ or ‘loaders’ are simple enough to understand, but are difficult to translate into working functionality.
      • Error messages need improvement. It’s very tough to take the information they give you and act on it.

    Considering the difficulties, module bundling with Webpack is still worth it. Simplifying your build process from separate pipelines using Gulp/Grunt/etc. into a single build process over a dependency graph is a drastic improvement in paradigm.

    If you’re just trying to process some Sass files, I would stick to Gulp.

    Discovering PICO-8

    written by Zach Morrissey on 2018-05-03


    The PICO-8 is a ‘fantasy console’, which is basically a virtual machine for programming videogames with a set amount of memory, disk space, screen size, etc. I discovered it after recently playing the excellent Celeste on Nintendo Switch and learning that it started life as a PICO-8 game. This thing looks damn fun to program for, and it comes batteries-included for most tasks so you can get up and running in no time.

    PICO 8 Illustration From Lexaloffle

    Highlights

    • Lua scripting.
    • 16-color palette.
    • 128 x 128 pixel screen size.
    • 32k cartridge sizes.
    • 4-channel sound.

    On top of all that, the VM that runs the PICO-8 cartridge has been ported to hardware! It’s being included in all $9 C.H.I.P. computers by default now, so you can actually get a hardware implementation of the PICO-8 fantasy console for use in your own home.

    People make some suuuuuuuuuuper cool stuff with it.

    Let’s Give It a Shot

    Starting out, I watched this instructional YouTube playlist by TeamworkCast. It helped me get an idea of what I wanted to create with PICO-8 (alongside being super entertaining). So now I was in a place where I had to figure out:

    • Game programming patterns. How do people actually make games?
    • Designing movement and obstacles.
    • Sounds & sound editing.

    Super easy, right? Well, no. Turns out people spend their entire lives getting good at these things. I’ve been doing a lot of research into how to get better at it though.

    Resources Used

    The things that helped me learn PICO-8, by subject area:

    • PICO-8 API by Neko250. A neat little reference for helping figure out the calls you can make from PICO-8 itself.
    • /r/pico8 subreddit - An exceedingly helpful community of people who are both PICO-8 game designers and enthusiasts.
    • PICO-8 Cheat Sheet - A quick reference made by the users of /r/pico8.
    • Lua for Programmers Cheat Sheet - Admittedly, grappling with the overwhelming simplicity of Lua was a little frustrating at first, but having a slew of examples helped.
    • Awesome PICO-8 - One of the curated awesome lists that are found on github. This one in particular has a lot of good resources for starting up.

    The PICO-8 Interface

    The first place you’ll land is the command line in PICO-8, which is used to launch all of the other screens. It looks exactly like you’d expect it to.

    PICO 8 Command Line

    There’s also a built-in IDE for developing games! This is where you can write your Lua code, edit sprites, create background tiles and more.

    PICO 8 Integrated Development Environment

    Multiple other screens exist in the editor for creating assets: a sprite editor, a tile editor, an SFX editor, and a music arrangement screen. Combining all of these, you can make pretty convincing graphics & sound pretty quickly.

    Experimenting

    Since I don’t have the depth of experience to be able to make a legitimate video game yet, I’m focusing on doing exceedingly simple stuff for the time being.

    There are basically only three functions to know:

    • _update() - Update the game state. Change objects’ positions, behavior, etc. to match the game state.
    • _draw() - Give directions for what to draw on screen.
    • _init() - Initialize game state (only if you need to).

    Using these, I ended up trying to do what most people try to do their first time using any 2D graphics program: make a ball bounce across the screen.

    PICO-8 code to bounce a ball.
    PICO-8 bouncing ball

    It works! But hey - this isn’t interactive, so we can’t quite call it a game yet. Here’s how we’ll do that.

    • Add some listening for button presses using btnp(). This determines whether or not a button was pressed that was not pressed during the prior frame.
    • Change the velocity based on that button. This time we’re adding a random positive velocity to the ball.
    • Display some message or feedback so that I can tell I implemented it correctly.
    PICO-8 bouncing ball

    Nice. But now, since this is a videogame we want to update a few more things:

    • Run in 60FPS by using the _update60() function instead of the update function. This makes our game run a lot smoother.
    • Add more natural slowdowns to the bounces. As the ball bounces off the floor, it should lose some of its velocity.
    • Add some more x-axis movement. If the player hits the button, have it send the ball off in some random velocity along the x-axis.
    PICO-8 bouncing ball

    What’s Next? (To be continued in Part 2)

    Now that I’ve got the bones in place, I’m planning on creating a barebones clone of Flappy Bird for Part II of this blog post.

    What’s the way to get there?

    • Change the sprites to be objects. The ball itself should have an internal state that captures which sprite should be displayed at any point, and the collisions between it and everything else.
    • Functional decomposition. It’s easy to write a lot of spaghetti code in pulling together simple demos; the way to scale this is to change the major parts to objects.
    • Create wall-generation logic. Flappy Bird works on a randomly-generated basis, so I’ll need to create and manage a list of obstacles for the player to surpass.

    Check it out on the next version of this post!

    Hosting a Static Site on S3 & Cloudfront

    written by Zach Morrissey on 2018-04-25

    Considering this site is built with a static site generator, I needed to find somewhere to host it. Amazon’s S3 service, as it turns out, actually has an option to host a website out of it, which is ridiculously convenient for those who are comfortable using AWS services. Prior to this, I had purchased my domain through Namecheap.

    Requirements

    S3 + Cloudfront met the pretty simple set of requirements I wanted to enforce:

    • HTTPS - Site has to be secured with a valid certificate.
    • Custom Domain - For professionalism and consistency.
    • Easy Deployment - Should be able to push files easily, or have a CI/CD tool listen to my git repo to push things up.

    Hosting in S3 seemed to be a very straightforward way of achieving these things, so I decided to go with it.

    Setting Up S3 for Static Web Hosting

    Setting up S3 for hosting a static site is as simple as checking a box. First create a bucket for your site.

    Static Bucket Settings

    On that menu, select your default landing document (something along the lines of index.html) and error document (i.e. 404.html or something like that).

    Website Options

    Setting Bucket Permissions

    Once you’ve created your static site bucket and configured it for hosting, you need to set permissions on the bucket to allow GET requests.

    Under the permissions tab, edit the bucket policy to allow the s3:GetObject permission for everyone.

    S3 Bucket Policy Editor
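
    In the policy editor shown above, a policy along these lines is what you’re after (this is the standard public-read policy; swap your-bucket-name for your actual bucket):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "PublicReadGetObject",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
      ]
    }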

    This will make this bucket public. Do not keep sensitive data of any sort in it once you’ve made this change.

    Don’t Need HTTPS? You’re Done!

    If you’re not interested in setting up your site to work with HTTPS, then you can call it quits at this point. S3 will provide you with a public URL that you can reach to see the site now:

    http://<your_bucket_name_here>.s3-website-<region>.amazonaws.com/

    You can point your domain at this URL and boom, you’re done.

    Creating a Cloudfront Distribution

    If you do want HTTPS on your site (and you should), you’ll need to create a Cloudfront distribution (Amazon’s CDN) in order to provide secure connections.

    Create a new web distribution, with the following options selected:

    • Origin Domain Name - Amazon will automatically populate your S3 bucket in the dropdown, but you should use the full S3 website URL: <your_bucket_name_here>.s3-website-<region>.amazonaws.com
    • Viewer Protocol Policy - Choose what you want to support, but I chose ‘Redirect HTTP to HTTPS’ here.
    • Alternate Domain Names (CNAMEs) - This is how you’ll link your domain to the Cloudfront distribution, so you should put your domain in here.
    • Default Root Object - This should be the same as the landing document you specified in your S3 bucket website options (i.e. index.html).
    • Price Class - One noted option if you want to save on hosting costs is to select the US/Canada/Europe only price class.
    • Custom SSL Certificate - By default, the distribution uses the Cloudfront certificate, but you should request your own if you’re hosting on a custom domain. Which brings us to our next section…

    Request an SSL Certificate

    In order to secure your site with HTTPS, you can request a certificate through AWS Certificate Manager for your domain. You’ll have to be able to prove you own the domain either through DNS validation or by email validation. I used email validation.

    There are a number of email addresses that AWS will send a confirmation email to, listed here. Your domain provider should have options for forwarding email if you don’t have it set up through them.

    Once you’re done with the certificate business, you can submit that Cloudfront distribution, which will take some time to set up.

    Registering Your Cloudfront Distribution with Your Domain Provider

    Once your Cloudfront distribution is set up, you should have a URL where you can access your site, like this:

    a983qewr90asf.cloudfront.net

    Once you’ve got that URL, all you need to do is set up CNAME records in your domain provider, which will allow your domain to redirect to the Cloudfront distribution. For my site, I wanted to set it up using the bare domain, i.e. zmsy.co instead of www.zmsy.co.

    I set up a CNAME record and a URL redirect in this way:

    Type            Host    Value
    CNAME           @       a983qewr90asf.cloudfront.net
    URL Redirect    @       https://a983qewr90asf.cloudfront.net

    That’s a Wrap!

    You’ll be up and running once your domain is redirecting correctly. Now, once you push objects up to S3, they should populate out to the Cloudfront distribution automatically (albeit sometimes at a random schedule, since it’s a CDN that operates on its own time). Plenty of toolkits have S3 as a built-in endpoint, so setting up deployment either through a CLI or from a CI/CD server is relatively straightforward.
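
    For example, with the AWS CLI installed and credentials configured, a deploy can be as simple as this one-liner (assuming your generated site lives in a local public/ directory and your bucket is named your-bucket-name):

    aws s3 sync ./public s3://your-bucket-name --delete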

    At the time I wrote this, GitHub Pages (which primarily builds sites with Jekyll) wasn’t offering HTTPS on custom domains but had just announced support. I’ll be checking that one out soon too.