I'm back at RC (or perhaps, non-mini RC). The plan is to write every day.
I enjoyed SICP. I wanted to go deeper. Partly to make up for missing out the first time I did RC. Partly because, having worked on infrastructure for the past two years, I've been wondering: is there more to life than making things run at scale?
Like the first time I did RC, I'm working through Brent Yorgey's CIS 194. At times I feel that going into Haskell is escapism. Let's stick with it for a week and see where this goes.
To quote Lin Clark, WebAssembly is a way of taking code written in programming languages other than JavaScript and running that code in the browser. While this breaks JavaScript's 'monopoly', we expect developers to use both WebAssembly and JavaScript in the same application.
WebAssembly fulfils Java's founding promise of "write once run anywhere", with the JVM swapped out for the browser. Running the code in the browser effectively means the compiled .wasm binary is agnostic to the choice of processor and OS.
Why bother?
The first advantage is portability. Existing codebases can be compiled to WebAssembly and run in the browser. The compiler toolchain with the most support for WebAssembly is LLVM, which benefits languages like C/C++ and Rust. In addition, these 'systems languages' have smaller runtimes, which in turn compile to smaller .wasm binaries.
The second advantage is speed. Compared to JavaScript, WebAssembly is closer to machine code and has already gone through an optimization stage. Furthermore, because WebAssembly is typed, the engine can skip the type-inference-driven reoptimization it performs on JavaScript. Fetching WebAssembly also takes less time because it is more compact than JavaScript, as noted by Figma in a real-world use case.
Cool! What's the catch?
Like all new technologies, WebAssembly can be rough around the edges. WebAssembly natively supports only ints and floats, so it relies on JavaScript for string support. For the moment WebAssembly also has no direct access to the DOM. The back-and-forth overhead between WebAssembly and JavaScript can add up, to varying degrees depending on the browser.
It's also worth noting that loading .wasm binaries has to be done via JavaScript, since the browser's cross-origin rules block reads from the file system. Changes to allow direct loading are in the works, which would make WebAssembly interop more seamless.
Finally, WebAssembly doesn't know how to interact with garbage collectors. Once work to expose bindings to the JavaScript garbage collector is complete, garbage-collected languages can more readily be cross-compiled to .wasm binaries.
Got it. Anything else?
The WebAssembly project has now expanded beyond the browser, with wasmtime as an independent runtime and WASI as the unified systems interface. This could also make the .wasm format the new standard for portable binaries, replacing .dylib/.so/.dll files for cross-language bridges.
One wonders, more broadly, whether the technology will further consolidate the dominance of large tech companies, or whether a thousand startups will bloom. Will there be a lot of end users who like it, but very few who love it? What will the killer app be, or for that matter, will there be one at all? Only time will tell, but we're excited to find out!
I've always had the occasion to write. My time at RC has provided the occasion to share my writing. For this week's post, instead of sharing what happened, I'll write a bit more on what I'd like to happen.
Thank you for being on this journey with me.
Startups
I'd like to start a startup one day. I don't have a burning idea I'd like to bring into the world, not yet anyway. I do want to be in the best position to make it a success when I do.
In conversations with friends, I'd say that going to a conference to sell is something I'm terrible at. However, if the survival of my startup (and the livelihood of my team) depended on it, that would be the time to go above and beyond. To quote Sahil Lavingia:
Being a founder is great for personal growth, because if you don't grow, your business dies.
Making things better
I was at Square when Yassin Falafel came out. I was so touched by his story that I wrote the following e-mail:
Jack,
I remember when Economic Empowerment came up on our walls, I wasn't sure what the exact message was. Watching Yassin's Falafel, the message hit home.
My wife and I come from Thailand and Malaysia respectively. We feel extremely privileged to have been able to study abroad, and to now be based in the Bay Area. Every time we go back to see our families, however, we feel disheartened by how much running a business there relies on gatekeepers, and by how disproportionately they impact social mobility.
We intend to go back some day and help make things better. Yassin's Falafel reminds us of this. I've always been proud to be at Square, but watching the video made me feel proud that the work I do helps empower more people like him.
I've been watching a couple of Bret Victor videos recently. In this video, he mentions how there was a lot of hostility towards assembly when it was first introduced. Assembly allowed machine code to be represented in words: for example, writing `add $t1 $t2 $t3` to mean "add the values in $t2 and $t3, then place the result in $t1", instead of, say, `0000 0001 0000 1001 0101 0000 0010 0001`.
Despite improving developer productivity and reducing the likelihood of errors, those who had been programming in binary didn't see much value in assembly. This resistance to new ways of working, to unlearning what you've already learned and thinking in new ways, is as present in 2020 as it was in the 1950s. If anything, it's a more concerning anti-pattern today given how much faster the world is changing.
This thought motivated me to look into category theory. Perhaps understanding the more theoretical aspects could help illuminate why things were implemented the way they were. Perhaps doing so would highlight concepts that we take for granted. In last week's post, I mentioned consuming more of Bartosz Milewski's content. I thought I'd complete all three parts before starting Haskell. I'm almost done with Part 1, but have realized I want a bit more intuition on what applying the theory looks like.
Instead of more Haskell, I actually found Chet Corcos' post on functional JavaScript super helpful. A pure function is a function whose return value is determined only by its input values, with no dependencies or side effects. He describes how pure functions "bound the cognitive load of programming", because they force you to be concerned only with the body of the function.
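To make that concrete, here's a minimal Python sketch (my own example, not from Corcos' post): the pure function depends only on its arguments, while the impure one reaches out to hidden state.

```python
# A pure function: the result depends only on its arguments.
def total(prices: list) -> int:
    return sum(prices)

# An impure function: the result also depends on (and mutates) hidden state.
cart = []

def add_to_cart(price: int) -> int:
    cart.append(price)        # side effect: mutates module-level state
    return sum(cart)

print(total([1, 2, 3]))       # always 6, no matter when or how often it's called
print(add_to_cart(1))         # 1 now...
print(add_to_cart(1))         # ...but 2 the next time, with the same argument
```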
A core concept in category theory is composition; the following is from the preface of Milewski's book.
Composition is at the very root of category theory - it’s part of the definition of the category itself. And I will argue strongly that composition is the essence of programming.
In Corcos' post, he emphasizes how writing code through composition of pure functions makes it much easier to trace errors - you'll get a stack trace through every function all the way to the source of the bug. This is in contrast to object-oriented programming, where the state of the object may not be exposed.
What's very cool is seeing how, when mapping two successive functions over a list, the version applying a single map of the composition of the two functions is faster than the version applying two successive maps, one per function! The sections on lazy evaluation (with similar performance gains) are fascinating, though perhaps best described in the post itself.
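Here's a rough Python version of that experiment (my own sketch; Corcos benchmarks it in JavaScript). The composed version traverses the list once and skips the intermediate list; actual timings will vary by runtime.

```python
from timeit import timeit

def compose(f, g):
    """Return a function that applies g, then f."""
    return lambda x: f(g(x))

double = lambda x: x * 2
increment = lambda x: x + 1
numbers = list(range(100_000))

# Two maps: the list is traversed twice and an intermediate list is built.
two_maps = lambda: list(map(double, list(map(increment, numbers))))

# One map of the composed function: a single traversal, no intermediate list.
one_map = lambda: list(map(compose(double, increment), numbers))

assert two_maps() == one_map()
print("two maps:", timeit(two_maps, number=20))
print("one map :", timeit(one_map, number=20))
```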
Like all things in life, there are always trade-offs. Gabriel Gonzalez' blog Haskell for All has an excellent post on how Haskell is better for long-term productivity because it enforces best practices. I first came across the Option type in Rust; perhaps it was inspired by Haskell's Maybe. In any case, this framing forces you to invest time at the start dealing with edge cases, as opposed to investing time later on when things break.
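Rust's Option and Haskell's Maybe don't translate exactly to Python, but a rough analogue using Optional (my own sketch) shows the idea: the signature announces the 'missing' case, and the caller has to deal with it up front rather than when things break.

```python
from typing import Optional

def find_user(user_id: int, users: dict) -> Optional[str]:
    """Return the user's name, or None if the id is unknown."""
    return users.get(user_id)

users = {1: "ada", 2: "grace"}
name = find_user(3, users)

if name is None:          # the 'missing' case has to be handled here...
    print("no such user")
else:
    print(name.upper())   # ...before we can safely use the value
```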
The post goes on to talk about the flip side of Haskell. I didn't realize Haskell implements its default String type as a linked list (perhaps a counter-example to those claiming linked lists only come up in interviews). While abstraction is a 'good thing', any good thing in excess is bad - Haskell makes advanced abstraction syntactically cheap, and thus it's easy to get carried away. Amusingly there's even a Hitler parody for that...
Content: Naval Ravikant podcasts
I've collected content in a number of formats, but not enough of each for a series. I'll feature them this week.
First is podcasts. I enjoy listening to Naval Ravikant, the co-founder of AngelList but perhaps best known for his How to Get Rich series of tweets. On Samantha Ryan's podcast, his response to the question of how we decide what to do with our limited time (at 25:39) emphasizes the importance of being honest and effective.
It's very important to be honest with yourself so you can actually succeed without hurting yourself, without being in self conflict all the time and without being ineffective. Because if you're effective and you just get what you want, you can trade it for other things.
On Tim Ferriss' podcast, he elaborates on a tweet in the series about gaining financial freedom through ownership of equity. The discussion (at 31:22) echoes Bret Victor above: the hard part of any endeavor is getting over one's resistance to change.
The hard parts are not the learning, it is the unlearning. It’s not the climbing up the mountain. It’s the going back down to the bottom of the mountain and starting over.
It’s the beginner’s mind that every great artist, or every great business person has, which is: you have to be willing to start from scratch. You have to be willing to hit reset and go back to zero. Because you have to realize that what you already know, and what you’re already doing, is actually an impediment to your full potential.
Content: Sam Altman talks
On the theme of inertia, I learned a lot at Square but I did go through a phase of feeling comfortable. Towards the end of that phase I realized that by not pushing harder I did a disservice to the company, and perhaps more importantly, I did a disservice to myself.
I thought a change of environment would be a good wake-up call. I moved from the 3,000+ person company to a 30+ person startup. To help prepare for the transition I listened to a number of Y Combinator talks, in particular the ones with Sam Altman. This talk is from Work at a Startup Expo 2018 (at 16:20).
Every job I've ever had I've been wildly unqualified for, and doing that is the #1 secret to having a really great career. That's the way you have a super fast rate of personal growth, and I think the way careers go is you should put in most of the effort at the beginning. Because it's this compound interest-like thing, where the work you do now, the learning you do now, the improvement you make early in your career gets to pay off for all the rest.
The talk How to Succeed with a Startup discusses staying positive as a team, stepping up, and a bias towards action (at 8:17).
The spirit of "we'll figure it out" is my favorite thing to hear among early startup team members. A lot of things go wrong, the situations that startups win in tend to be incredibly dynamic, and so this idea that even if I'm not qualified on paper, even if I haven't solved this problem before, even if this problem feels like it's going to kill the company (which many problems will feel that way), the spirit among the team of "we've got the people we need, we're gonna figure this out, we're gonna get this done", that's super important.
Content: Credit card processing flow
The final content format is static visuals. I know, spending time in payments makes this flowchart more interesting for me personally. It's also curious that this comes up in an article about Chinese payment apps.
There's a weekly book club at RC on Martin Kleppmann's Designing Data-Intensive Applications. Recent discussions reminded me of things I learned at Bradfield; it's been a fun process of rediscovery.
The first is on how caching affects memory access. Let's start with Moore's law. In the 1960s, everything was slow. Processors got a speedup over the 70s and 80s; memory access did too, but not to the same degree. By the 90s we had a real gap - it was fast to process data but slow to access it, by a factor of say 100.
How did we bridge this gap? Caching. The idea is that if it's slow to go all the way to RAM for data, let's move the data a bit closer. This was achieved with a piece of dedicated hardware called the cache. Thus we have a 'memory hierarchy': first registers, next the cache, then RAM. Estimates of access times can be found here.
The cache retrieves data a bit like how we get library books. Instead of getting one book at a time, we get a few books in one go. The cache's analogue of the number of books is what's called the cache line. For simplicity, let's say 64 bytes of data is retrieved at a time and stored in the cache.
Thoughts about caching came up when we were discussing database indexing. We can index a database with binary search trees, but this is inefficient because a binary tree node occupies only a fraction of a cache line. This means there will be more cache misses (when the data we're looking for is not in the cache) relative to cache hits.
B+ trees remedy this by increasing the branching factor, i.e. the number of references to child nodes in each node of the tree. This ensures the whole cache line is used up. In practice the branching factor depends on the amount of space required to store node references and the range boundaries. A toy implementation of a B+ tree in Python can be found here.
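As a back-of-the-envelope sketch (my own numbers, purely illustrative): if each entry in an internal node is a key plus a child reference, the branching factor is roughly how many such entries fit in the block you read at a time.

```python
# Illustrative arithmetic only - real B+ tree nodes are typically sized to a
# disk/OS page (e.g. 4 KB) rather than a CPU cache line, and also hold headers.
def branching_factor(block_bytes: int, key_bytes: int, ref_bytes: int) -> int:
    return block_bytes // (key_bytes + ref_bytes)

# A binary search tree node holds one key and two references regardless of
# block size; a B+ tree node fills the whole block it reads.
print(branching_factor(64, key_bytes=8, ref_bytes=8))    # ~4 entries per 64-byte cache line
print(branching_factor(4096, key_bytes=8, ref_bytes=8))  # ~256 entries per 4 KB page
```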
Encoding
The second is on encoding. Computers store data in binary, i.e. a collection of 1s and 0s, but binary data by itself doesn't tell us what it means. We could think of it as a series of Booleans. We could also 'chunk' the binary data 8 bits at a time and have each chunk represent a character. This encoding is called ASCII, and can be found by typing `man ascii` in the Terminal. Each ASCII character takes one byte, i.e. 8 bits.
Suppose we were to send integer values to each other, say the number 1000. We could do it in 'text' format, where each character takes up one byte; this means we'll need 4 bytes for "1000". Alternatively we could send it in 'binary' format, where 4 bytes can represent anything from -2,147,483,648 to 2,147,483,647 (i.e. the range of signed 32-bit integers). In other words, sending integers in binary format is a lot more compact.
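A quick way to see this in Python (my own sketch, using the standard struct module). For 1000 itself both forms happen to be 4 bytes, so the gap shows more clearly with a larger number:

```python
import struct

n = 2_000_000_000

text = str(n).encode("ascii")   # one byte per digit character
binary = struct.pack(">i", n)   # fixed 4-byte signed 32-bit integer

print(len(text))                       # 10 bytes as text
print(len(binary))                     # 4 bytes as binary
print(struct.unpack(">i", binary)[0])  # round-trips back to 2000000000
```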
Thoughts about encoding came up when we were discussing data encoding formats. A commonly-used format is JSON, which is all text. This keeps things simple, but the lack of compactness means it's slower to send over the wire. Protocol Buffers, commonly known as protobuf, allows integers to be encoded in binary format, which is more compact over the wire. This makes a real difference not just for integers but also for binary attachments, like images.
A simple binary encoding is Bencode, which is the encoding used by BitTorrent. Bencode supports strings, integers, lists and dictionaries. Alas it's not that widely used, despite being a "human-readable" binary encoding format. A toy implementation of Bencode can be found here.
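The format is small enough that a minimal encoder fits in a few lines. This is my own sketch, not the toy implementation linked above:

```python
def bencode(value) -> bytes:
    """Encode strings, bytes, ints, lists and dicts in Bencode."""
    if isinstance(value, int):
        return b"i%de" % value                      # e.g. i42e
    if isinstance(value, str):
        value = value.encode("utf-8")
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)       # e.g. 4:spam
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        items = sorted(value.items())               # keys must be sorted
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(f"cannot bencode {type(value).__name__}")

print(bencode({"name": "spam", "sizes": [1, 1000]}))
# b'd4:name4:spam5:sizesli1ei1000eee'
```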
Functional programming
It's hard to miss the excitement around Haskell at RC. A few weeks back I told myself I'd like to spend a bit of time on it at my next RC to see what the hype is about. Then I found myself at a stopping point on other projects, and conveniently had Haskell lined up.
Like most things in life, plans only go so far. I came across category theory while reviewing RC discussions on Haskell. I watched the first video in Bartosz Milewski's Category Theory for Programmers series; I was hooked.
I'll admit part of it is Milewski's infectious enthusiasm for the subject. In the preface of his book, he describes how category theory is a "treasure trove of extremely useful programming ideas", which inspired a lot of ideas in Haskell. It's an area of math unlike calculus or algebra: instead of dealing with particulars, it deals with structure, the kind of structure that makes programs composable.
I also found Milewski's thread on the 'rise' of functional programming quite interesting. For a long time we were happy with imperative programming, using object-oriented programming as the paradigm for abstraction: hiding complexity (in particular, state) inside the object.
The 'end' of Moore's law has since brought multicore processors (on this topic, Sophie Wilson has an excellent talk here). However, imperative programming allows side effects, and hiding mutable state in objects makes data races more likely. While we can carry on doing what we've been doing thus far, it'll be an ongoing struggle in this new concurrent world.
Why needlessly endure complexity? Why not shift to the functional programming paradigm, where objects are immutable and functions are pure? Instead of building on shaky ground, Milewski's rallying cry is to fix the foundations so we can all move forward more effortlessly.
Content: The moral bucket list
David Brooks disarmed me with "I’m a pundit, more or less paid to appear smarter and better than I really am", yet this essay makes me pause to reflect every time.
When I meet such a person it brightens my whole day. But I confess I often have a sadder thought: It occurs to me that I’ve achieved a decent level of career success, but I have not achieved that. I have not achieved that generosity of spirit, or that depth of character.
Content: How to interview your interviewers
I've started doing interviews recently. Thinking about interviews as a chat to get to know potential colleagues can help take the edge off.
That’s because finance, like other forms of human behavior, underwent a change in the twentieth century, a shift equivalent to the emergence of modernism in the arts—a break with common sense, a turn toward self-referentiality and abstraction and notions that couldn’t be explained in workaday English.
In poetry, this moment took place with the publication of “The Waste Land.” In classical music, it was, perhaps, the première of “The Rite of Spring.” Jazz, dance, architecture, painting—all had comparable moments. The moment in finance came in 1973, with the publication of a paper in the Journal of Political Economy titled “The Pricing of Options and Corporate Liabilities,” by Fischer Black and Myron Scholes.
This week involved interview prep, but I got to spend a bit of time thinking about design. My first day at Square involved presentations from various teams, but the message that continues to echo is the one from design director Ty Lettau.
Your job is to absorb as much complexity as you can, and pass the simplicity through.
This philosophy showed its physical manifestation when I had a tour of the hardware lab. The degree of care and attention to detail ranged from the choice of materials (aesthetically pleasing yet durable), to how the product feels in the hand, to the flow from the seller to the buyer and back, to even the embedded software. The devices run on a fork of Android, which amongst other things allowed EMV payments to be fast yet secure, helping to cut down lines at Blue Bottle.
I worked in the Risk team, covering fraud, disputes and recovery. The inner workings under the hood may be cutting-edge machine learning (with lots of arbitrary rules set by intermediaries), but all this was abstracted away from the end user. We continually looked for ways to make the process faster, cheaper, simpler.
Figma (discussed in a prior post) has a Learn Design pilot, which highlights this multi-faceted aspect of design. Naturally I had a soft spot for the section on simplicity, and in particular this excerpt.
Often, creating something simple is more difficult than creating something complex. However, simplicity in design is not necessarily the opposite of complexity, but the revealing of the complex information in a measured and easy to digest way.
Simplicity manifests itself in writing too. I love this quote by Stephen Toulmin.
The effort the writer does not put into writing, the reader has to put into reading.
Add perfectionism to the mix, and you'll end up with lots of drafts and redrafts. On the flip side, that feeling of pride in the end result... priceless.
Content: The Laws of Simplicity
Re: simplicity, there's The Laws of Simplicity by John Maeda (Github profile here). The excerpt is Law No 1.
The simplest way to achieve simplicity is through thoughtful reduction.
Content: The Gap
Re: iterations, there's The Gap by Ira Glass. If there's one thing I'd take away from this blog post, it's this one. The RC version of this is advice from Dave: "when in doubt, code".
Content: The Flower Duet
On the path to sophistication, one may go through a phase of faux sophistication. Listening to Delibes on the way there... bliss.
Content: Do we need to have a passion?
FT Weekend used to have a section called The Sage and The Shrink. I've not read this post since the time it was published - writing a blog has truly been a journey of rediscovery.
The most constructive attitude is not “I must find my passion”. Instead pursue what passions you do have, big or small, and keep exploring the things that interest you with an open mind. There is plenty to love and appreciate, whether or not you find “the one”.
Content: High Rise
There are moments when I ask myself, is it too late to become an architect? The Square office was one. The Black Diamond in Copenhagen is another (when sharing the photo, the caption went "If I could marry a building, it would be this one").
The trip to Copenhagen also included a visit to the offices of the architectural firm Bjarke Ingels Group. My friend who worked there described how, at that point in time, they only took on projects through competitions. The office was filled with scale models, and it was a joy seeing how each initial concept evolved over time. The highly-interactive website (which used to run on Flash) has parts of this exposition in each project description.
I looked up this New Yorker profile of Bjarke Ingels for the purposes of this blog post (flows nicely from last week's post on Renzo Piano). This excerpt embodies The Sage's advice above perfectly.
Lesson of the week - when screen sharing on Zoom, disconnect additional monitor(s).
I've been working on porting Crafting Interpreters from C to Python, which naturally involves making some adjustments. The first is avoiding circular dependencies in the Python version; I imagine this is less of an issue in C, where the compilation/linking stage effectively combines all the source files into a single blob. The second is separating out pointers to arrays into arrays plus array indices.
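As a hedged sketch of that second adjustment (illustrative, not my actual code): clox walks the bytecode via an ip pointer that it dereferences and increments, whereas the Python port keeps the bytecode in a list and advances an integer index.

```python
# In clox (C), the VM reads instructions through a pointer into the chunk:
#     uint8_t instruction = *vm.ip++;
# In a Python port, the same idea becomes a list plus an integer index.
class VM:
    def __init__(self, code: list):
        self.code = code   # the bytecode array
        self.ip = 0        # index into self.code, instead of a raw pointer

    def read_byte(self):
        byte = self.code[self.ip]
        self.ip += 1
        return byte
```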
I'm happy with the progress so far, but now I'm implementing functions and the interactions are getting a lot more complex. In particular, how variable scopes interact with function call frames; I'm constantly puzzled about how to debug (and whether my adjustments overlooked more fundamental underpinnings of how things work). I know, last week I was all about "let's set up guardrails" to make reasoning about the code easier - it's all there now and I'm still stuck.
I'm contemplating starting over with my own custom implementation, but with function support right out of the gate (or at least, with the minimal infrastructure needed for the implementation). I anticipate this involves a lot more sketching of designs on paper before the first line of code - how fun!
The other fun thing with creating your own toy implementation is selecting your own set of reserved words. I've switched from 'var' to 'let' (to suppress prior trauma of learning JavaScript pre-ES6) and from 'this' to 'self'.
WebAssembly
New technology can be rough around the edges. Earlier in my time at RC I wanted to set up Python-Rust interop via WebAssembly. This involves compiling Rust functions to WebAssembly and loading the .wasm binaries in Python, in order to benefit from performance improvements.
I finally got this to work, example here. Calculating the 10,000th prime in Python with Rust achieves a 10x speedup vs pure Python. What's interesting is this closed issue, which might have been what tripped me up previously. The issue close date? August 24, 2020.
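Loading the compiled module from Python looks roughly like the sketch below. It assumes the wasmtime package, a hypothetical primes.wasm file and a hypothetical exported nth_prime function; the wasmtime API has also shifted between versions, so treat this as a sketch rather than the code in the linked example.

```python
from wasmtime import Store, Module, Instance

store = Store()
# "primes.wasm" and "nth_prime" are placeholders for the compiled Rust module
# and its exported function; see the linked example for the real names.
module = Module.from_file(store.engine, "primes.wasm")
instance = Instance(store, module, [])

nth_prime = instance.exports(store)["nth_prime"]
print(nth_prime(store, 10_000))
```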
Content: What you'll wish you'd known
Paul Graham was invited to speak at a high school in 2005, but somehow the school authorities vetoed the plan. I wonder if they're kicking themselves now.
The excerpt below is a highlight from a recent re-reading of the talk he prepared. On the back of this I'm going to indulge myself on something I've been curious about for a while - design.
If you're deciding between two projects, choose whichever seems most fun. If one blows up in your face, start another. Repeat till, like an internal combustion engine, the process becomes self-sustaining, and each project generates the next one. (This could take years.)
The excerpt that caught my eye on the first reading emphasizes the importance of experience, and draws parallels with Rilke's quote.
If it takes years to articulate great questions, what do you do now, at sixteen? Work toward finding one. Great questions don't appear suddenly. They gradually congeal in your head. And what makes them congeal is experience.
Content: Experiments at Airbnb
Data science at Airbnb had a mixed reputation a few years back, but something they did very well was marketing the practice through content and events. The well-curated blog attracted many to apply for a role. The post I enjoyed the most discussed best practices for A/B testing - on duration of experiments, understanding context and setting up guardrails.
Speaking of experiments and causal inference, I'm adding causal forests to the list of things to review...
Content: Indie Game
I'm fascinated by how distribution (or perhaps, more directly, monetization) plays a role in content creation. Jonathan Blow in his talk draws parallels between TV shows and computer games as each medium moved from gatekeepers to direct-to-consumer distribution - highly recommended.
What's also super fascinating is seeing the travails of indie game developers. The attention to detail, the degree of craftsmanship and the pursuit of perfection - I can't help but think of Jiro.
Content: Empty Streets (Haji + Emanuel Remix)
This mix is legendary, and amusingly, apt for 2020.
Content: Think before you build
I remember reading this post, spending hours on Google looking for it again, and feeling very sad when I couldn't find it. When I was compiling the content I wanted to share on this blog, I looked through my notes and there it was. The lone URL, no annotations, no comments. It was like finding treasure. I was overjoyed.
Perhaps there's a dream job there somewhere. Content finder?
The post is about how computers have done wonders for productivity, but in many cases speed compensates for the lack of rigorous thought. It's a reminder to pause and reflect before we write that first line of code, and as per a previous post, to cut through to what matters by thinking clearly from first principles.
What kept me looking for this post were the immortal words of Renzo Piano.
But architecture is about thinking. It's about slowness in some way. You need time. The bad thing about computers is that they make everything run very fast, so fast that you can have a baby in nine weeks instead of nine months. But you still need nine months, not nine weeks, to make a baby.
It's Week 7. I feel deflated. What I find puzzling is lacking motivation even while working on things I find interesting. Perhaps it's a combination of pushing myself a bit harder at the end of the previous week, and the absence of a work/home divide adding up. I made a conscious effort to sleep and eat better towards the end of the week, and to (temporarily) limit screen time, to help me feel more centered.
Timing-wise, it's annoying this coincided with Week 1 for the new Fall 2 batch. I had been looking forward to meeting new people, hearing what they plan to work on, and basking in the collective excitement. It's really cool seeing all the new events pop up.
Compilers
I continued to spend time on clox. My plan was to quickly complete the VM implementation end-to-end, and then go through multiple passes/iterations on the aspects I'd like to focus on. As the implementation got more complex, however, I found myself getting tripped up by the speedy approach. When I implemented local variables, for example, my previously working implementation of global variables no longer worked.
This forced me to go back and invest a little more time in guardrails. Debugging with print statements shifted to breakpoints. One-off tests became permanent. The debug flag now exposes helpful information, and has been extended to multiple debug levels. Comments are more verbose, and type annotations more widely available.
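As a sketch of what multiple debug levels can look like (illustrative only, not my actual codebase), the standard logging module already gives you a single knob for how chatty the VM is:

```python
import logging

# One setting controls verbosity: WARNING (quiet), INFO (compiled chunks),
# DEBUG (trace every instruction). Illustrative names and messages only.
logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("vm")

def run(code):
    for ip, instruction in enumerate(code):
        log.debug("ip=%d executing %s", ip, instruction)
        # ... dispatch on the instruction here ...

log.info("chunk compiled: %d instructions", 3)
run(["OP_CONSTANT", "OP_CONSTANT", "OP_ADD"])
```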
It's the story of all codebases - the trade-off between speed and best practice, and the introduction of tools to help manage growing complexity. What I didn't expect was the shift kicking in after less than a week. In any case, there's enough here for a presentation.
On writing
When I was blogging daily, I would include a motivational post, a career-related post, a general-interest post, as well as music and video content over a one-week period. I'll try to keep this mix going in the weekly blog posts.
What I couldn't quite fit into the mix were obituaries, collected from The Economist. I've not come across most of those featured before, but reading their stories helps illuminate the challenges of their times. I pause to wonder how our own lives (and times) will unfold, and, despite our best-laid plans, how much of a role serendipity plays.
If I could have fit them in, I would start with the story of Naty Revuelta.
Content: My time at Lehman
A thread in Week 6 is professional services, or finance more specifically. My experience mirrors the account by Nick Chirls; while in the trenches it's hard to escape "the lies that people tell themselves so that they can buy larger homes". Extending the theme of perspective, or the lack of it, is this excerpt.
The people around me measured themselves by one metric: The amount of money he or she made for the firm. Their bonus determined the respect they received. And yet, every last person felt poor.
Content: Advice for IC lifers
Last week we discussed engineering management; this week it's advice from Matt Klein for IC lifers.
One last thing: don't let anyone tell you that the tech/engineering is the easy part. It's not. It's hard. Soft skills are also hard. It's ALL hard, and both are required to succeed.
Last week I sat for an internal interview about my career progression to high level IC engineer, with a focus on how I've never felt I needed to become a manager to gain influence. I thought I would share some of my career advice for aspiring IC "lifers." Thread!
It's hard to choose a favorite Sting track. I'll go with Russians; it's haunting, and it pairs nicely with the content below on the fall of the Soviet Union.
Content: DTAC
Thais make the best ads. It works too - my first Thai SIM card was from DTAC.
Content: Magic Mountain
When I was working in finance, my end goal was to become a macro trader. In my idealized view, this involved reading The Economist and then taking positions on how the grand sweeps of history would unfold. I can only guess what the reality is like. As accounts go, I enjoyed the New Yorker piece on Daniel Arbess, who took the view that "the devolution of Communism would be the single biggest driver of opportunity in our time".
It's the end of my half batch at RC. I had extended my stay from 6 weeks to 12, so today marks the halfway point.
We had an end-of-batch ceremony, but true to RC's philosophy of Never Graduate, it merely marks the start of a new phase. I was touched by the kind words for those making the transition. The nicest words have to go to the RC faculty; I'm amazed at how well they've kept the program going despite the lockdown. I wish I could give all of them a hug, or given the new world we now live in, an elbow bump.
I started RC planning to develop my front-end skills. I thought this would involve JavaScript and frameworks; instead I've taken a bit of a detour. That said, I'm still on the path of thinking of the browser as the platform, as initially intended.
Week 1 - Getting started
Week 2 - Open source
Week 3 - Rust
Week 4 - WebAssembly
Week 5 - Microbenchmarks
Week 6 - Compilers
I had the chance to answer questions from the incoming batch this week, and I'm excited to be vicariously infused with a healthy dose of enthusiasm next week! RC faculty are fantastic at getting the most interesting and intellectually curious people to join. I mentioned to the incoming batch that I'm a little jealous of my batchmates with interests in domains with well-defined boundaries; it appears they're better able to concentrate their collective efforts and get a lot done that way.
My own varied interests continue regardless. In the morning, I polished up a bit more code in my Python implementation of lox.
I'm not sure what got me started on algebraic data types (ADTs), but I spent some time between events translating a Rust example to Python. The PEP 484 provision for forward references seems to have made all the difference here. I haven't yet been able to fully wrap my head around "Why bother?", but I did discover a thoughtful post by Mark Seemann. Instead of thinking about these more sophisticated types as 'stronger' tests, the better framing is to consider how ADTs make illegal states unrepresentable.
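Here's the flavor of it in Python (my own toy, not the Rust example I was translating): an expression is either a literal or the sum of two sub-expressions, so half-formed states simply can't be constructed. The string annotations are where PEP 484 forward references come in, since Expr isn't defined until the end.

```python
from dataclasses import dataclass
from typing import Union

# An expression is *either* a literal number *or* the sum of two
# sub-expressions - nothing else is representable.
@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: "Expr"    # forward reference: Expr is defined below
    right: "Expr"

Expr = Union[Lit, Add]

def evaluate(expr: Expr) -> int:
    if isinstance(expr, Lit):
        return expr.value
    return evaluate(expr.left) + evaluate(expr.right)

print(evaluate(Add(Lit(1), Add(Lit(2), Lit(3)))))  # 6
```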
I discovered the RC publication Code Words along the way, with a post on ADTs and one on compilers. I'm reminded of Legal Affairs, an excellent general interest magazine about the law by Yale Law School students. Like Code Words, the publication sadly only lasted a few years.
Re: frameworks, I recently found out Vue has surpassed React in the number of Github stars. I mentioned this, amused, at the Rust learning group, and Vinayak responded by sharing this cool documentary. Since I do use the blog as an accountability tool, my plan is to try out Vue, in addition to reading this Evan You interview in Increment.
On writing
To become a better developer, there's no way around writing lots of code; the same goes for writing prose. Making the effort to write daily has definitely made me feel a lot more comfortable about sharing what I've learned. As per a previous post, the problem I feared was that people might care too much, when in reality it's the opposite.
From now on I'll shift to a weekly cadence. I'll try to keep the length reasonable. *wink*
Content: E lucevan le stelle
I also use the blog to embarrass my younger self. I previously claimed a love for opera, thinking I'd emerge as a more cultured version of myself. It's not clear that has happened, but I did come out of this phase with a few favorites. Here's the aria from Tosca, performed by Luciano Pavarotti.
Content: The rise of human-computer cooperation
I know, it's a TED talk (and not Brené Brown either). When I first picked up programming, I was fascinated by the idea that humans and computers excel at different skills. The best teams are usually not the smartest humans or the fastest computers working individually, but composite teams that can assess who's better where and have a 'streamlined' human-computer interface. This idea was touched on briefly in the featured article of a previous post, and it's put across really well by Shyam Sankar.
Content: Change is the only constant
When I was making the career transition to software, I constantly wondered if I was making the right move. A part of me wanted to return to professional services; it felt familiar not just to myself but also to my family and friends. It felt safe.
I feel extremely fortunate to have made the shift, not for any sector specifically, but for taking the step towards lifelong learning. If I could do it once, perhaps I can do it again. Not an unlikely proposition as the world changes ever more rapidly.
This FT Weekend article by John Lanchester captures this sentiment perfectly (enclosed below).
The moral of the story is that even if you think you understand the impact economic forces can have, they can still strike dangerously close to home. It helps to have a compass, and mine is based on two principles, both of them learnt from my banker father: anxiety is freedom, and the way you are living will have been your life.
Content: Designing Data-Intensive Applications
The last section of my Friday posts usually features a book excerpt. Designing Data-Intensive Applications by Martin Kleppmann is not one that I've completed, or at least not cover-to-cover (conveniently, the reading group starts next week). The excerpt I'd like to feature is its excellent dedication.
Technology is a powerful force in our society. Data, software, and communication can be used for bad: to entrench unfair power structures, to undermine human rights, and to protect vested interests. But they can also be used for good: to make underrepresented people’s voices heard, to create opportunities for everyone, and to avert disasters. This book is dedicated to everyone working toward the good.
Learning about compilers has been fun. We started with a stack-based virtual machine that executes bytecode, just completed the scanner (which converts source code into tokens), and are now getting into the compiler (which converts tokens into bytecode).
Right now things still feel a little removed from the Fitzgerald vs Egorov post. The optimization that stood out to me was inlining, where the compilation process replaces a function call with the body of the function itself (saving the overhead of the call). Egorov achieved this in JavaScript by stringifying a function to get its source text; Fitzgerald noted that in Rust this simply involves annotating the function with #[inline].
Stack-based VMs are simple yet can elegantly do a lot! Register-based VMs, in contrast, are more complex but can achieve better performance. I amuse myself thinking about the time I reviewed my notes on MIPS (register-based) thinking it could help me understand WebAssembly (stack-based). Bob Nystrom's all-time favourite CS paper is actually on this topic: Lua moving from a stack-based to a register-based instruction set.
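To make the stack-based part concrete, here's a toy dispatch loop in Python (my own sketch, loosely in the spirit of clox rather than a copy of it): the expression 1 + 2 * 3 becomes a flat list of instructions that push operands and pop them to apply operators.

```python
def run(bytecode):
    """Execute a flat list of (op, arg) instructions on a value stack."""
    stack = []
    for op, arg in bytecode:
        if op == "OP_CONSTANT":
            stack.append(arg)
        elif op == "OP_ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "OP_MULTIPLY":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# 1 + 2 * 3, already compiled to postfix-style bytecode
program = [
    ("OP_CONSTANT", 1),
    ("OP_CONSTANT", 2),
    ("OP_CONSTANT", 3),
    ("OP_MULTIPLY", None),
    ("OP_ADD", None),
]
print(run(program))  # 7
```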
Professional services
Paul Graham's essay described how, in the 1980s, the most desirable employers shifted from large corporations to professional services firms. I think of this topic fondly. My very first job was in finance, which was the default of sorts for college graduates then.
The progression in most professional services roles has junior employees executing what the client asks for, and senior employees managing client relationships. I liked the execution part, and always wondered, "What if I'm not into golf?" (cue Mad Men episode). It's curious how in some cases this model is flipped - Google's IC track goes all the way up to Senior Fellow, i.e. Level 11.
Content: Engineering management
We've covered data science, product management, design, dev ops and data engineering so far. I've saved this one for last - engineering management. I love Julia Evans' zine on the topic, as well as this excellent post by Raylene Yung.
I wouldn't say I have a burning desire to be a manager. That said, I do feel (1) there are lots of soft skills that get honed when you're responsible for your team's success, and (2) having that experience helps you empathize with your own manager. Plus think of all of the books that now become more interesting...