

Managing technical quality in a codebase.

Mish Boyka



If there’s one thing that engineers, engineering managers, and technology executives are likely to agree on, it’s that there’s a crisis of technical quality. One diagnosis and cure is easy to identify: our engineers aren’t prioritizing quality, and we need to hire better engineers or retrain the ones we have. Of course, you should feel free to replace “engineers” with “Product Managers” or “executives” if that feels more comfortable. It’s a compelling narrative with a clear villain, and it conveniently shifts blame away from engineering leadership. Still, like most narratives that move accountability towards the folks with the least power, it’s both unhelpful and wrong.

When you accept the premise that low technical quality results from poor decision-making, you start looking for bad judgment, and someone at the company must be the culprit. Is it the previous CTO? Is it that Staff Engineer looking at you with a nervous smile? Is it everyone? What if it’s none of those folks, and stranger yet isn’t even your fault either?

In most cases, low technical quality isn’t a crisis; it’s the expected, normal state. Engineers generally make reasonable quality decisions when they make them, and successful companies raise their quality bar over time as they scale, pivot, or shift up-market towards enterprise users. At a well-run and successful company, most of your previous technical decisions won’t meet your current quality threshold. Rather than a failure, closing the gap between your current and target technical quality is a routine, essential part of effective engineering leadership.

The problem

As an engineering leadership team, your goal is to maintain an appropriate technical quality level while devoting as much energy as possible towards the core business. You must balance quality across multiple timeframes, and those timeframes generally have conflicting needs. For example, you’ll do very different work getting that critical partnership out the door for next week’s deadline versus the work you’ll do to build a platform that supports launching ten times faster next quarter.

Just as your company’s technical quality bar will shift over time, your approach to managing technical quality will evolve in tandem:

  1. fix the hot spots that are causing immediate problems
  2. adopt best practices that are known to improve quality
  3. prioritize leverage points that preserve quality as your software changes
  4. align technical vectors in how your organization changes software
  5. measure technical quality to guide deeper investment
  6. spin up a technical quality team to create systems and tools for quality
  7. run a quality program to measure, track and create accountability

As we dig into this toolkit of approaches, remember to pick the cheapest, most straightforward tool likely to work. Technical quality is a long-term game. There’s no such thing as winning, only learning and earning the chance to keep playing.

Ascending the staircase

There’s a particular joy in drilling into the challenge at hand until you find a generalized problem worth solving. However, an equally important instinct is solving the current situation quickly and moving on to the next pressing issue.

As you think about the right quality improvements to make for your team and organization, it’s generally most effective to start with the lightest-weight solutions and only progress towards heavier ones as earlier efforts collapse under the pressure of scale. If you can’t get teams to adopt proper code linting, your attempts to roll out a comprehensive quality program are doomed. Although comprehensive programs can be more effective at scale, they’re much, much harder to execute.

So, do the quick stuff first!

Even if it doesn’t work, you’ll learn more and more quickly from failing to roll out the easy stuff than failing to roll out the hard stuff. Then you’ll get to an improved second iteration sooner. Over time you will move towards comprehensive approaches, but there’s no need to rush. Don’t abandon the ease, joy, and innocence of early organizations for the perils of enterprise-scale coordination without proper need.

It’s convenient to present these phases as a linear staircase to be ascended, but that’s rarely how real organizations use them. You’re more likely to fix a quality hot spot, roll out a best practice, start running an architecture review, abolish that architecture review, and go back to hot-spotting for a bit. Premature processes add more friction than value and are quick to expose themselves as ineffective. If something isn’t working, try for a bit to make it work, and then celebrate its demise.

Hot spots

When confronted by a quality problem, the first instinct is often to identify a process failure that necessarily requires a process solution. If a deployment causes an outage, it’s because the author didn’t correctly follow the code test process, so now we’re going to require tests with every commit – that’ll teach those lazy developers!

There’s the old joke about Sarbanes-Oxley: it doesn’t reduce risk; it just makes it clear who to blame when things go wrong. Unfortunately, that joke applies without humor to how many organizations roll out processes. Accountability has its role, but it’s much more important to understand the problem at hand and try to fix it directly than to create process-driven accountability.

Process rollout requires humans to change how they work, which you shouldn’t undertake lightly. Rather than reaching for process improvement, start by donning the performance engineer’s mindset. Measure the problem at hand, identify where the bulk of the issue occurs, and focus on precisely that area.

The previous example of an untested deploy might benefit from giving direct feedback to the deploying engineer about changing their testing habits. Alternatively, maybe you’re better served by acknowledging that your software design is error-prone and adopting the “define errors out of existence” approach described in A Philosophy of Software Design.
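As a hedged sketch of that idea (the class names here are hypothetical, purely for illustration): rather than raising an error when a caller deletes a missing key, the design below defines deletion of a missing key as a no-op, so an entire class of caller error handling disappears.

```python
class Settings:
    """Error-prone design: deleting an unknown key raises KeyError,
    forcing every caller to handle that case."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def delete(self, key):
        del self._data[key]  # raises KeyError if key is absent


class ForgivingSettings:
    """The error is defined out of existence: deleting a missing key
    is simply a no-op, so callers never need to guard against it."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def delete(self, key):
        self._data.pop(key, None)  # never raises

    def get(self, key, default=None):
        return self._data.get(key, default)
```

The second design has strictly fewer observable failure modes, which is the point: the quality problem is removed rather than policed.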

If you have a development velocity problem, it might be optimizing test runtimes, moving your Docker compile step onto a RAM disk, or using the techniques described in Software Design X-Rays to find the specific files to improve.

Systems thinking is the most transformative thinking technique I’ve encountered in my career. Still, at times it can be a siren beckoning you towards fixing a current system you may be better discarding. Sure, you can roll out a new training program to teach your team how to write better tests, but alternatively, maybe you can just delete the one test file where 98% of test failures happen. That’s the unreasonable effectiveness of prioritizing hot spots, and why it should be the first technique you use to improve technical quality.
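The hot-spot mindset can be sketched in a few lines. This hypothetical helper takes a list of failing-test file paths (in practice you’d export these from your CI system’s reports) and returns the smallest set of files that accounts for most failures:

```python
from collections import Counter

def find_hot_spots(failures, threshold=0.5):
    """Return the files that together account for at least `threshold`
    of all recorded failures, ordered from worst to best.

    `failures` contains one file path per observed failure; collecting
    it is left to your CI tooling.
    """
    counts = Counter(failures)
    total = sum(counts.values())
    hot, covered = [], 0
    for path, n in counts.most_common():
        hot.append((path, n))
        covered += n
        if covered / total >= threshold:
            break
    return hot

# Example: one flaky file dominates the failure count.
failures = ["tests/test_payments.py"] * 98 + ["tests/test_auth.py", "tests/test_user.py"]
hot = find_hot_spots(failures)  # the payments file alone crosses the threshold
```

If one file really does produce 98% of your failures, this kind of crude measurement finds it in minutes, long before a training program would.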

At some point, you’re likely to find that your organization is creating quality problems faster than you’re able to fix hot spots, and that’s when it’s time to move on to adopting best practices.

Best practices

I once worked at a company that didn’t have a team planning process. Over time the head of engineering was increasingly frustrated with the inability to project target dates and mandated that we use Scrum. After the mandate, a manager wrote the Scrum process on a wiki. There was an announcement that we were using Scrum. Managers told their teams to use Scrum. Mission accomplished!

Of course, no one started to use Scrum. Everyone kept doing what they’d done before. It’s awkward to acknowledge mistakes, so the head of engineering declared adoption a major win, and no one had the heart to say differently.

This sad tale mirrors how many companies try to roll out best practices, and it’s one of the reasons why best practices have such a bad reputation. In theory, organizations would benefit from adopting best practices before fixing quality hot spots, but I recommend practices after hot spotting. Adopting best practices requires a level of organizational and leadership maturity that takes some time to develop.

When you’re rolling out a new practice, remember that a good process is evolved rather than mandated. Study how other companies adopt similar practices, document your intended approach, experiment with the practice with a few engaged teams, sand down the rough edges, improve the documentation based on the challenges, and only then roll it out further. A rushed process is a failed process.

Equally important is the idea of limiting concurrent process rollouts. If you try to get teams to adopt multiple new practices simultaneously, you’re fighting for their attention with yourself. It also makes it harder to attribute impact later if you’re considering reverting or modifying one of the new practices. It’s a bit draconian, but I’ve come to believe that you ought to limit yourself to a single best practice rollout at any given time. Channel all your energy towards making one practice a success rather than splitting resources across a handful.

Adopting a single new practice at a time also forces you to think carefully about which to prioritize. Selecting your next process sounds easy, but it’s often unclear which best practices are genuine best practices and which are just familiar or famous. A genuine best practice has to be supported by research, and the best source of research on this topic is Accelerate.

While all of Accelerate’s recommendations are data-driven and quite good, the handful that I’ve found most helpful to adopt early are version control, trunk-based development, CI/CD, production observability (including developers on-call for the systems they write), and working in small, atomic changes. There are many other practices I’d love to advocate for (who hasn’t spent a career era advocating for better internal documentation?), but I don’t trust my intuition the way I once did.

The transition from fixing hot spots to adopting best practices comes when you’re overwhelmed by too many hot spots to cool. The next transition, from best practices to leverage points, comes when you find yourself wanting to adopt a new best practice before your in-progress best practice is working. Rather than increasing your best practice adoption-in-progress limit, move on to the next tool.

Leverage points

In the Hot spots section, we talked about using the performance engineer’s mindset to identify the right problems to fix. Optimization works well for the issues you already have, but it’s intentionally inapplicable to the future: the worst sin of performance engineering is applying effort to unproven problems.

However, as you look at how software changes over time, there are a small handful of places where extra investment preserves quality over time, both by preventing gross quality failures and reducing the cost of future quality investments.

I call those quality leverage points, and the three most impactful points are interfaces, stateful systems, and data models.

Interfaces are contracts between systems. Effective interfaces decouple clients from the encapsulated implementation. Durable interfaces expose all the underlying essential complexity and none of the underlying accidental complexity. Delightful interfaces are eagerly discerning, discerningly eager.

State is the hardest part of any system to change, and that resistance to change makes stateful systems another critical leverage point. State’s inertia comes from its scale. From its tendency to accrete complexity on behalf of availability, reliability, and compliance properties. From having the sort of correctness described with probabilities rather than theorems. (Or in the case of distributed state, both probabilities and theorems.)

Data models are the intersection of interfaces and state, constraining your stateful system’s capabilities down to what your application considers legal. A good data model is rigid: it only exposes what it genuinely supports and prevents the expression of invalid states. A good data model is tolerant of evolution over time. Effective data models are not even slightly clever.

As you identify these leverage points in your work, take the extra time to approach them deliberately. If it’s an interface, integrate half a dozen clients against the mocked implementation. If it’s a data model, represent half a dozen real scenarios. If it’s stateful, exercise the failure modes, check the consistency behaviors, and establish performance benchmarks resembling your production scenario.
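For the data-model case, one way to represent real scenarios deliberately is to make invalid states unrepresentable. The sketch below (all names hypothetical) encodes a subscription lifecycle so that illegal transitions are rejected by construction, which is the rigidity described above:

```python
from dataclasses import dataclass
from enum import Enum

class SubscriptionState(Enum):
    TRIAL = "trial"
    ACTIVE = "active"
    CANCELED = "canceled"

# The only legal transitions; anything else is rejected by the model.
_TRANSITIONS = {
    SubscriptionState.TRIAL: {SubscriptionState.ACTIVE, SubscriptionState.CANCELED},
    SubscriptionState.ACTIVE: {SubscriptionState.CANCELED},
    SubscriptionState.CANCELED: set(),
}

@dataclass(frozen=True)
class Subscription:
    user_id: int
    state: SubscriptionState = SubscriptionState.TRIAL

    def transition(self, new_state: SubscriptionState) -> "Subscription":
        """Return a new Subscription in `new_state`, or raise if the
        transition isn't legal. Frozen instances can't be mutated into
        an invalid state behind the model's back."""
        if new_state not in _TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        return Subscription(self.user_id, new_state)
```

Because the model itself enforces legality, every consumer of this state inherits the guarantee for free.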

Take everything you’ve learned, and pull it into a technical specification document that you socialize across your team. Gather industry feedback from peers. Even after you begin implementation, listen to reality’s voice and remain open to changes.

One of the hidden powers of investing in leverage points is that you don’t need total organizational alignment to do it. To write a technical vision or roll out a best practice, you need that sort of buy-in, which is why I recommend starting with leverage points. However, if you’ve exhausted the accessible impact from leverage points, it may be time to move on to driving broader organizational alignment.

Technical vectors

Effective organizations marshal the majority of their efforts towards a shared vision. If you plot every project (every technical decision, really) as a vector on a grid, the more those vectors point in the same direction, the more you’ll accomplish over time. Conversely, some of the most impressive engineers I’ve worked with created vectors with an extraordinary magnitude but a misaligned direction, and ultimately harmed their organizations as they attempted to lead them.

One sure-fire solution to aligning technical direction is to route all related decisions to the same person with Architect somewhere in their title. This works well but is challenging to scale, and the quality of an architect’s decisions degrades the further they get from doing real work on real code in the real process. At the other extreme, you can allow every team to make independent decisions. But an organization that allows any tool is an organization with uniformly unsupported tooling.

Your fundamental tools for aligning technical vectors are:

  • Give direct feedback. When folks run into misalignment, the first answer is often process change, but instead start with simply giving direct feedback to the individuals who you believe are misaligned. As much as they’re missing your context, you’re missing theirs, and a kind, clear conversation can often prevent years of unnecessary process.
  • Articulate your visions and strategies in a clear, helpful document. (This is a rich topic that I’ll be covering separately.)
  • Encapsulate your approach in your workflows and tooling. Documentation of a clear vision is helpful, but some folks simply won’t study your document. Deliberate tools create workflows that nurture habits far better than training and documentation. For example, provisioning a new service might require going to a website that requires you to add a link to a technical spec for that service. Another approach might be blocking deploys to production if the service doesn’t have an on-call setup established, with someone currently on-call, and that individual must also have their push notifications enabled.
  • Train new team members during their onboarding. Changing folks’ habits after they’ve formed is quite challenging, which is frustrating if you’re attempting to get folks to adopt new practices. However, if you get folks pointed in the right direction when they join, then that habit-momentum will work in favor of remaining aligned.
  • Use Conway’s Law. Conway’s Law argues that organizations build software that reflects their structure. If your organization is poorly structured, this will lead to tightly coupled or tangled software. However, it’s also a force for quality if your organization’s design is an effective one.
  • Curate technology change using architecture reviews, investment strategies, and a structured process for adopting new tools. Most misalignment comes from missing context, and these are the organizational leverage points to inject context into decision-making. Many organizations start here, but it’s the last box of tools that I recommend opening. How can you provide consistent architecture reviews without an articulated vision? Why tell folks your strategy after they’ve designed something rather than in their onboarding process?
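As one illustration of encapsulating your approach in tooling, the deploy-gate idea above might be sketched like this (the service fields and checks are assumptions, not a real deploy system):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Service:
    name: str
    spec_url: Optional[str]        # link to the service's technical spec
    current_oncall: Optional[str]  # who is on-call right now, if anyone
    oncall_push_enabled: bool      # does the on-call have push notifications on?

def deploy_blockers(service: Service) -> List[str]:
    """Return every reason a production deploy should be blocked.
    An empty list means the deploy may proceed."""
    blockers = []
    if not service.spec_url:
        blockers.append("no technical spec linked")
    if not service.current_oncall:
        blockers.append("no one is currently on-call")
    elif not service.oncall_push_enabled:
        blockers.append("on-call has push notifications disabled")
    return blockers
```

The gate turns your vision into a habit: nobody has to remember the policy, because the workflow refuses to proceed without it.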

Regardless of the approaches you use to align your technical vectors, this is work that tends to happen over months and years. There’s no world where you write the vision document and the org immediately aligns behind its brilliance. Much more likely is that it gathers dust until you invest in building support.

Most companies can combine the above techniques, from hot-spot fixing to vector alignment, into a successful approach for managing technical quality, and hopefully that’s the case for you. However, many find that they’re not enough and must move towards heavier approaches. In that case, the first step is, as always, measurement.

Measure technical quality

The desire to measure in software engineering has generally outpaced our ability to measure. Accelerate identifies metrics for measuring velocity, which are powerful for locating process and tooling problems, but those metrics start after the code’s been merged. How do you measure your codebase’s quality such that you can identify gaps, propose a plan of action, and evaluate the impact of your efforts to improve?

There are some process measurements that correlate with effective changes. For example, you could measure the number of files changed in each pull request on the understanding that smaller pull requests are generally higher quality. You could also measure a codebase’s lines of code per file, on the assumption that very large files are generally hard to extend. These could both be quite helpful, and I’d even recommend measuring them, but I think they are at best proxy measurements for code quality.
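Either proxy measurement is cheap to instrument. A minimal sketch of the lines-per-file measurement might look like this (the size limit is an arbitrary illustration, not a recommendation):

```python
import os
from typing import Dict, List, Tuple

def lines_per_file(root: str, exts: Tuple[str, ...] = (".py",)) -> Dict[str, int]:
    """Map each source file under `root` to its line count.
    This is a proxy metric: very large files are generally hard to extend."""
    result = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, errors="replace") as f:
                    result[path] = sum(1 for _ in f)
    return result

def oversized(counts: Dict[str, int], limit: int = 1000) -> List[str]:
    """Files exceeding the (illustrative) line limit, sorted for stable output."""
    return sorted(p for p, n in counts.items() if n > limit)
```

Tracking the `oversized` list over time tells you whether the codebase is trending towards or away from extensibility, even though it says nothing directly about quality.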

My experience is that it is possible to usefully measure codebase quality, and it comes down to developing an extremely precise definition of quality. The more detailed you can get your definition of quality, the more useful it becomes to measure a codebase, and the more instructive it becomes to folks hoping to improve the quality of the area they’re working on. This approach is described in some detail in Building Evolutionary Architectures and Reclaim unreasonable software.

Some representative components to consider including in your quality definition:

  • What percentage of the code is statically typed?
  • How many files have associated tests?
  • What is test coverage within your codebase?
  • How narrow are the public interfaces across modules?
  • What percentage of files use the preferred HTTP library?
  • Do endpoints respond to requests within 500ms after a cold start?
  • How many functions have dangerous read-after-write behavior? Or perform unnecessary reads against the primary database instance?
  • How many endpoints perform all state mutation within a single transaction?
  • How many functions acquire low-granularity locks?
  • How many hot files exist which are changed in more than half of pull requests?

You’re welcome to disagree that some of these properties ought to exist in your codebase’s definition of quality: your definition should be specific to your codebase and your needs. The important thing is developing a precise, measurable definition. There will be disagreement in the development of that definition, and you will necessarily change the definition over time.

After you’ve developed the definition, you still have to instrument it, and instrumentation can be genuinely challenging; yet instrumentation is a requirement for useful metrics. Instrumentation complexity is the biggest friction for adopting these techniques in practice, but if you can push through, you unlock something pretty phenomenal: a real, dynamic quality score that you can track over time and use to create a clarity of alignment in your approach that conceptual alignment cannot.
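Once each component of your definition is instrumented, combining them into a single trackable score can be as simple as a weighted average. A minimal sketch, with illustrative component names and weights:

```python
from typing import Dict

def quality_score(measurements: Dict[str, float], weights: Dict[str, int]) -> float:
    """Combine instrumented quality measurements (each in [0, 1]) into a
    single weighted score. Component names and weights are illustrative;
    your definition should be specific to your codebase."""
    total_weight = sum(weights.values())
    return sum(
        measurements[name] * weight for name, weight in weights.items()
    ) / total_weight

weights = {"statically_typed": 3, "files_with_tests": 2, "preferred_http_lib": 1}
measurements = {"statically_typed": 0.40, "files_with_tests": 0.75, "preferred_http_lib": 0.90}
score = quality_score(measurements, weights)
```

The absolute number matters less than its trend: a score that moves when you invest, and regresses when you don’t, is what creates alignment.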

With quality defined and instrumented, your next step is deciding between investing in a quality team or a quality program. A dedicated team is easy to coordinate and predictable in its bandwidth and is generally the easier place to start.

Technical quality team

A technical quality team is a software engineering team dedicated to creating quality in your codebase. You might call this team Developer Productivity, Developer Tools, or Product Infrastructure. In any case, the team’s goal is to create and preserve quality across your company’s software.

This is not what’s sometimes called a quality assurance team. Although both teams make investments into tests, the technical quality team has a broader remit from workflow to build to test to interface design.

When you’re bootstrapping such a team, start with a fixed team size of three to six folks. A small team forces you to relentlessly prioritize the roadmap by impact and ensures you’ll maintain focus on the achievable. Over time this team will accumulate systems to maintain that require scaling investment (Jenkins clusters are a common example), and you’ll want to size the team as a function of the broader engineering organization. Rules of thumb are tricky here, but maybe one engineer working on developer tooling for every fifteen product engineers, and this is in addition to your infrastructure engineering investment.

It’s rare for these teams to have a product manager; generally one or more Staff-plus engineers and the engineering manager partner to fill that role. Sometimes they employ a Technical Program Manager, but typically only after they cross into operating a quality program, as described in the next section.

When spinning up and operating one of these teams, some fundamentals of success are:

  1. Trust metrics over intuition. You should have a way to measure every project. Quality is a complex system, the sort of place where your intuition can easily deceive you. Similarly, as you become more senior at your company, your experience will no longer reflect most other folks’ experiences. You already know about the rough edges and get priority help if you find a new one, but most other folks don’t. Metrics keep you honest.
  2. Keep your intuition fresh. Code and process change over time and your intuition is going stale every week you’re away from building product features. Most folks find that team embedding and team rotations are the best way to keep your instincts relevant. Others monitor chat for problems, as well as a healthy schedule of 1:1 discussions with product developers. The best folks do both of those and keep their metrics dashboards handy.
  3. Listen to, and learn from, your users. There is a popular idea of “taste level,” which implies that some folks simply know what good looks like. There is a huge variance in folks who design effective quality investments, but rather than an innate skill, it usually comes from deeply understanding what your users are trying to accomplish and prioritizing that over your implementation constraints.

    Adoption and usability of your tools is much more important than raw power. A powerful tool that’s difficult to use will get a few power users, but most folks will pass it by. Slow down to get these details right. Hide all the accidental complexity. Watch an engineer try to use your tool for the first time without helping them with it. Improve the gaps. Do that ten more times! If you’re not doing user research on your tools, then you are doomed as a quality investment team.

  4. Do fewer things, but do them better. When you’re building for the entire engineering organization, anything you do well will accelerate the overall organization. Anything you do poorly, including something almost great with too many rough edges, will drag everyone down. Although it’s almost always true that doing the few most important things will contribute more than many mediocre projects, this is even more true in cases where you’re trying to roll out tools and workflows to your entire organization (the organizational process-in-progress limits still apply here!).
  5. Don’t hoard impact. There’s a fundamental tension between centralized quality teams and the teams that they support. It’s often the case that there’s a globally optimal approach preferred by the centralized team which grates heavily on a subset of teams that work on atypical domains or workloads. One representative example is a company writing its backend servers in JavaScript and not allowing their machine learning engineers to use the Python ecosystem because they don’t want to support two ecosystems. Another case is a company standardized on using REST/HTTP2/JSON for all APIs where a particular team wants to use gRPC instead.

    There’s no perfect answer here, but it’s important to establish a thoughtful approach that balances the benefits of exploration against the benefits of standardization.

A successful technical quality team using the above approaches will be unquestionably more productive than if the same number of engineers were doing product engineering work directly. Indeed, discounted developer productivity (in the spirit of discounted cash flow) is the theoretically correct way to measure such a team’s impact. Only theoretically, because such calculations are mostly an evaluation of your self-confidence.

Even if you’re quite successful, you’ll always have a backlog of high-impact work that you want to take on but don’t have the bandwidth to complete. Organizations don’t make purely rational team resourcing decisions, and you may find that you lack the bandwidth to complete important projects and likewise can’t get approval to hire additional folks onto your team.

It’s generally a good sign that your team has more available high-impact work than you can take on, because it means you’re selective on what you do. This means you shouldn’t necessarily try to grow your technical quality team if you have a backlog. However, if you find that there is critical quality work that you can’t get to, then it may be time to explore starting a quality program.

Quality program

A quality program isn’t computer code at all, but rather an initiative led by a dedicated team to maintain technical quality across an organization. A quality program takes on the broad remit of achieving the organization’s target level of software quality. These are relatively uncommon, but something similar that you’ve probably encountered is an incident program responsible for a company’s incident retrospectives and remediations.

Given this is written for an audience of senior technical leaders, let’s assume you have the technical perspective covered. Your next step is to find a Technical Program Manager who can co-lead the program and operate its mechanics. It’s tempting to run the informational aspects of an organizational program without a Technical Program Manager, and you can make considerable progress that way, but it’s a trap: you’ll be crushed by the coordination overhead of solo-driving a program in a large organization.

Operating organizational programs is a broad topic about which much has been written, but the core approach is:

  1. Identify a program sponsor. You can’t change an organization’s behavior without an empowered sponsor. Organizations behave the way they do because it’s the optimal solution to their current constraints, and you can’t shift those constraints without the advocacy of someone powerful.
  2. Generate sustainable, reproducible metrics. It’s common for folks running a program to spend four-plus hours a week maintaining their dataset by hand. This doesn’t work. Your data will have holes in it, you won’t be able to integrate your data with automation in later steps, and you’ll run out of energy to do the work to effect real change; refreshing a metrics dashboard has no inherent value.
  3. Identify program goals for every impacted team and a clear path for them to accomplish those goals. Your program has to identify specific goals for each impacted team, for example, reducing test flakiness in their tests or closing incident remediations more quickly. However, it’s essential that you provide the map to success! So many programs make a demand of teams without providing direction on how to accomplish those goals. The program owner is the subject matter expert; don’t offload your strategy onto every team to independently reinvent.
  4. Build the tools and documentation to support teams towards their goals. Once you’ve identified a clear path for teams towards your program goals, figure out how you can help them make those changes! This might be providing “golden examples” of what things ought to look like, or even better, an example pull request refactoring a challenging section of code into the new pattern. It might be providing a test script to verify the migration worked correctly. It might be auto-generating the conversion commit so teams can test, verify, and merge it without having to write it themselves. Do as much as you possibly can to avoid every team having to deeply understand the problem space you’re attempting to make progress in.
  5. Create a goal dashboard and share it widely. Once you have your program goals communicated to each team, you have to provide dashboards that help them understand their current state, their goal, and give reinforcing feedback on their (hopeful) progress along the way. The best dashboard is going to be both a scorecard for each team’s work and also provide breadcrumbs for each team on where to focus their next efforts.

    There are three distinct zoom-levels that your dashboard should support. The fully zoomed-out level helps you evaluate your program’s impact. The fully zoomed-in level helps an individual team understand their remaining work. A third level between the two helps organizational leaders hold their teams accountable (and to support your program sponsor in making concrete, specific asks to hold those leaders accountable).

  6. Send programmatic nudges for folks behind on their goals. Folks are busy. They won’t always prioritize your program’s goals. Alternatively, they might do an amazing job of making your requested improvements but backtrack later with deprecated practices. Use nudges to direct the attention of teams towards the next work they should take towards your program’s goals. Remember, attention is a scarce resource! If you waste folks’ time with a nudge email or ping, they won’t pay attention to the next one.
  7. Periodically review program status with your sponsor. Programs are trying to make progress on an organizational priority that doesn’t naturally align with the teams’ goals. Many teams struggle to break from their local prioritization to accomplish global priorities. This is where it’s essential to review your overall progress with your sponsor and point them towards the teams that aren’t prioritizing program work. Effectively leveraging your sponsor to bridge misaligned prioritization will be essential to your success.
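The programmatic-nudge step above can be sketched as a small selection function (team names, goals, and the gap threshold are all illustrative):

```python
from typing import Dict, List

def teams_to_nudge(goals: Dict[str, float],
                   progress: Dict[str, float],
                   min_gap: float = 0.2) -> List[str]:
    """Return teams far enough behind their program goal to warrant a nudge.

    `goals` and `progress` map team name -> target and current completion,
    both in [0, 1]. Only teams behind by at least `min_gap` are nudged,
    so attention isn't wasted on teams that are already nearly done."""
    return sorted(
        team for team, target in goals.items()
        if target - progress.get(team, 0.0) >= min_gap
    )

goals = {"payments": 1.0, "search": 1.0, "growth": 1.0}
progress = {"payments": 0.95, "search": 0.50}
# growth has recorded no progress at all, so it gets nudged; payments,
# being nearly done, is left alone
```

The threshold is the important design choice: nudging everyone on every cycle is exactly the attention-wasting failure mode the step warns against.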

In a lot of ways, a program is just an endless migration, and the techniques that apply to migrations work for programs as well.

If you get all of those steps right, you’re running a genuinely great program. This might feel like a lot of work, and wow, it is: a lot of programs go wrong. The three leading causes of failed programs are:

  1. running it purely from a process perspective and becoming detached from the reality of what you’re trying to accomplish,
  2. running it purely from a technical perspective and thinking that you can skip the essential steps of advocating for your goal and listening to the folks you’re trying to motivate,
  3. trying to cover both perspectives as a single person–don’t go it alone!

A bad program is a lot like an inefficient non-profit: the goal is right, but few of the funds reach the intended recipients. No matter how you decide to measure technical quality, the most important thing to remember when running your quality program is that the program isn’t the goal. The goal is creating technical quality. Organizational programs are massive and build so much momentum that inertia propels them forward long after they’ve stopped working. Keep your program lean enough to cancel, and remain self-critical enough to cancel it if it ceases driving quality creation.

Start small and add slowly

When you realize your actual technical quality has fallen considerably behind your target technical quality, the natural first reaction is to panic and start rolling out a vast array of techniques and solutions. Dumping all your ingredients into the pot, inevitably, doesn’t work well, and worse, you don’t even know which parts to keep.

If you find yourself struggling with technical quality–and we all do, frequently–then start with something small and iterate on it until it works. Then add another technique, and iterate on that too. Slowly build towards something that genuinely works, even if it means weathering accusations of not moving fast enough. When it comes to complex systems and interdependencies, moving quickly is just optics; it’s moving slowly that gets the job done.


Raymarching with Fennel and LÖVE – Andrey Orst 





Previously, I decided to implement a rather basic raycasting engine in ClojureScript.
It was a lot of fun and an interesting experience, and ClojureScript was awesome.
I implemented a small labyrinth game, and thought about adding more features to the engine, such as camera shake and wall height changes.
But when I started working on these, I quickly understood that I’d like to move on to something more interesting, like a real 3D rendering engine that also uses rays.

Obviously, my first thought was to write a ray-tracer.
This technique is widely known, and has gained a lot of traction recently.
With native hardware support for ray tracing, a lot of games are using it, and there are a lot of tutorials teaching how to implement one.
In short, we cast a bunch of rays in 3D space and calculate their trajectories, computing what each ray will hit and bounce off.
Different materials have different bounce properties, and by tracing rays from the camera to the source of light, we can imitate illumination.
There are also a lot of different approaches to calculating bounces, e.g. for global illumination and ambient light, but I felt that it was a rather complicated task for a weekend post.
And unlike raycasting, most ray-tracers require polygonal information in order to work, where raycasting only needs to know wall start and end points.

I wanted a similar approach for 3D rendering, where we specify an object in terms of its mathematical representation.
For a sphere, we’ll just specify the coordinates of its center and a radius, and our rays will find intersection points with it, providing us sufficient data to draw the sphere on screen.
And recently, I read about a similar technique that uses rays for drawing on screen, but instead of casting infinite rays as in raycasting, it marches a ray forward in steps.
It also uses a special trick to make this process very efficient, so we can use it for rendering real 3D objects.

I’ve decided to structure this post similarly to the one about raycasting, so this will be another long read, often more about Fennel than about raymarching, but at the end, I promise, we’ll get something that looks like this:

So, just as with raycasting, the first thing we need to do is understand how a raymarching engine works on paper.

Raymarching basics

Raymarching can be illustrated similarly to a raycaster, except it requires more steps before we can render our image.
First, we need a camera, and an object to look at:

Our first step is to cast a ray; however, unlike with raycasting, we’ll cast only a portion of a ray:

We then check whether the ray intersects the sphere.
It doesn’t, so we take one more step:

It still doesn’t intersect, so we repeat:

Oops, the ray overshot and is now inside the sphere.
This is not a good outcome, as we want our rays to end exactly at the object’s surface, without calculating the intersection point with the object itself.
We can fix this by casting a shorter ray:

However, this is very inefficient!
And besides, if we change the angle a bit or move the camera, we will overshoot again.
Which means that we’ll either get an incorrect result, or require a very small step size, which will blow up the computation.
How can we fix this?

Distance estimation

The solution to this is a signed distance function, also called a Distance Estimator.
Imagine if we knew how far we are from the object at any point in time.
This would mean that we could shoot a ray of that length in any direction and still not hit anything.
Let’s add another object to the scene:

Now, let’s draw two circles, which represent the distances from the objects to the point from which we’ll cast rays:

We can see that there are two circles, and one is bigger than the other.
This means that if we choose the shortest safe distance, we can cast a ray in any direction and not overshoot anything.
For example, let’s cast a ray towards the square:

We can see that we haven’t reached the square, but, more importantly, we did not overshoot it.
Now we need to march the ray again, but what distance should it cover?
To answer this question, we take another distance estimation, from the ray’s end to the objects in the scene:

Once again we choose the shorter distance and march towards the square, then get the distance again, and repeat the whole process:

You can see that with each step the distance to the object becomes smaller, and thus we will never overshoot the object.
However, this also means that we will take a lot of really small steps until we finally hit the object, if we ever do.
This is not a good idea: it is even more inefficient than using a fixed distance, and produces more accurate results than we really need.
So instead of marching until we exactly hit the object, we march just enough times, i.e. until the distance to the object is small enough that there’s no real point in continuing, as it is clear that we will hit the object soon.
But this also means that if a ray goes near the edge of an object, we do a lot of expensive distance estimations.

Here’s a ray that is parallel to the side of the square, and marches towards the circle:

We do a lot of seemingly pointless measurements, and if the ray were closer to the square’s side, we would do even more steps.
However, this also means that we can use this data (since we’ve already computed it) to render such things as glow or ambient occlusion.
But more on this later.

Once a ray hits an object, we have all the data we need.
Each ray represents a point on the screen, and the more rays we cast, the higher the resolution of our image will be.
And since we’re not using triangles to represent objects, our spheres will always be smooth, no matter how close we get, because there are no polygons involved.

This is basically it.
Raymarching is quite a simple concept, just like the raycaster, although it’s a bit more complicated, as we now have to compute things in 3D space.
So let’s begin implementing it by installing required tools, and setting up the project.
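To preview where we’re heading, the whole marching loop described above fits in a few lines. Here is a minimal sketch in the Fennel we’ll be writing below; all names here are illustrative, and `estimate-distance` is a stand-in for the scene distance estimator we’ll implement later (the Fennel syntax itself is explained further down):

```fennel
;; A sketch of the core raymarching loop described above.
;; `estimate-distance` is an assumed helper: it returns the shortest
;; safe distance from a point to anything in the scene.
(fn march [[ox oy oz] [dx dy dz] estimate-distance]
  (var travelled 0)   ;; total distance covered by the ray so far
  (var done? false)
  (var steps 0)
  (while (and (not done?) (< steps 100))
    ;; current ray end: origin + direction * travelled
    (let [point [(+ ox (* dx travelled))
                 (+ oy (* dy travelled))
                 (+ oz (* dz travelled))]
          dist (estimate-distance point)]
      (if (< dist 0.001)                        ;; close enough: call it a hit
          (set done? true)
          (set travelled (+ travelled dist))))  ;; march by the safe distance
    (set steps (+ steps 1)))
  travelled)
```

If `done?` ends up true, the ray hit a surface after travelling `travelled` units; otherwise it escaped the scene (or ran out of steps).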

Project structure

As you know from the title, we will use two main tools to create our ray-marcher: LÖVE, a free game engine, and the Fennel programming language.
I’ve chosen Fennel because it is a Lisp-like language that compiles to Lua, and I’m quite a fan of Lisps.
But we also need to draw somewhere, and I know of no GUI toolkit for Lua.
There is, however, LÖVE – a game engine that runs Lua code and is capable of running on all systems, thus a perfect candidate for our task.

Installation steps may differ per operating system, so please refer to the manuals.
At the time of writing this post I’m using Fedora GNU/Linux, so for me it means:

$ sudo dnf install love luarocks readline-devel
$ luarocks install --local fennel
$ luarocks install --local readline # requires readline-devel
$ export PATH="$PATH:$HOME/.luarocks/bin"

It’s better to permanently add $HOME/.luarocks/bin (or another path, if your installation differs) to the PATH variable in your shell, so you can use the installed utilities without specifying the full path every time.
You can test that everything is installed correctly by running fennel in your command line.

$ fennel
Welcome to Fennel 0.5.0 on Lua 5.3!
Use (doc something) to view documentation.
>> (+ 1 2 3)
6

For other distributions the installation steps may vary, and on Windows I think it’s safe to skip the readline part, which is entirely optional but makes editing in the REPL a bit more comfortable.

Once everything is installed, let’s create the project directory, and the main.fnl file, where we will write our code.

$ mkdir love_raymarching
$ cd love_raymarching
$ touch main.fnl

And that’s it!
We can test if everything works by adding this code to main.fnl:

(fn love.draw []
  (love.graphics.print "It works!"))

Now we can compile it with fennel --compile main.fnl > main.lua, thus producing the main.lua file, and run love . (the dot is intentional – it indicates the current directory).

A window should appear with the white text It works! in the upper-left corner:

Now we can begin implementing our raymarcher.

Scene setup

Just as in raycaster, we need a camera that will shoot rays, and some objects to look at.
Let’s begin by creating a camera object that will store coordinates and rotation information.
We can do so by using var to declare a variable that is local to our file and that we can later change with set:

(var camera {:pos [0.0 0.0 0.0]
             :x-rotate 0.0
             :z-rotate 0.0})

For those unfamiliar with Lisps, and especially Clojure, let me quickly explain what this syntax is.
If you know this stuff, feel free to skip this part.

We start with the var special form, which binds a value to a name like this: (var name value).
So if we start the REPL using the fennel command in the shell and write (var a 40), a new variable a will be created.
We can then check that it has the desired value by typing a and pressing return:

>> a
40

We can then alter the contents of this variable by using the set special form, which works like this – (set name new-value):

>> (set a (+ a 2))
>> a
42

Now to the curly and square brackets.
Everything enclosed in curly braces is a hashmap.
We can use any Lua value as a key, and the most common choice is a string, but Fennel has additional syntax for defining keys – a colon followed by a word: :a.
This is called a keyword, and in Fennel it is essentially the same as "a", except we don’t need to write the pair of quotes.
However, keywords can’t contain spaces or certain other symbols.

So writing {:a 1 :b 2 :c :hello} in the REPL will make a new table that holds three key-value pairs, which we can later get at with another piece of syntax – the dot ..
Combining it with var, we can see that it works:

>> (var m {:a 1 :b 2 :c :hello})
>> (. m :b)
2

There’s also a shorthand for this: we can type m.b to access the :b key’s value:

>> m.b
2

Notice that even though we’ve specified the value for :c as :hello, the REPL printed it to us as "hello".

We’re left with the square brackets now, and these denote a plain vector.
It can grow and shrink, and can store any Lua values:

>> [0 :a "b c" (fn [x] x)]
[0 "a" "b c" #<function: 0x56482230e090>]

However, Lua doesn’t really have vectors or arrays; it uses tables for this, where the keys are simply indexes.
So the code above is equivalent to the Fennel expression {1 0 2 "a" 3 "b c" 4 (fn [x] x)}, but we can use square brackets for convenience.

Note that we can combine indexed tables (vectors) and ordinary tables (hashmaps).
We can do it as shown above, by specifying indexes as keys, or by defining a vector var and setting a key in it to some value:

>> (var v [0 1 :a])
>> (set v.a 3)
>> v
{:a 3
 1 0
 2 1
 3 "a"}

So camera is essentially a Lua table that stores the keys :pos, :x-rotate, and :z-rotate, each holding a respective value.
We use a vector for our position, and two floats for our rotation angles.
Now we can make objects, but before that, we need a scene to store those objects:

(var scene [])

Yep, that’s our scene.
Nothing fancy – simply an empty vector to which we will later add objects.

Now we can create these objects, so let’s start with perhaps the simplest one – a sphere.
And I’ll also briefly explain what makes raymarching different from other methods of creating 3D graphics.

Creating objects

What is a sphere?
That depends on the domain we’re working in.
Let’s open up Blender, remove the default cube, and create a sphere with Shift+a, Mesh, UV Sphere:

To me, this looks nothing like a sphere, because it consists of rectangles.
However, if we subdivide the surface, we get a more accurate representation:

This looks more like a sphere, but it is still just an approximation.
Theoretically, if we move very close to it, we will see the edges and corners, especially with flat shading.
Also, each subdivision adds more points, and it gets more and more expensive to compute:

We have to make these trade-offs because we don’t need very accurate spheres when we need real-time processing.
But raymarching doesn’t have this limitation, because a sphere in raymarching is defined by a point and a radius, which we can then work with by using a signed distance function.

So let’s create a function that will produce a sphere:

(fn sphere [radius pos color] ➊
  (let [[x y z] ➋ (or pos [0 0 0])
        [r g b] (or color [1 1 1])]
    {:radius (or radius 5)
     :pos [(or x 0) (or y 0) (or z 0)]
     :color [(or r 0) (or g 0) (or b 0)]
     :sdf sphere-distance ➌}))

There’s a lot of stuff going on, so let’s dive into it.

This is a so-called constructor – a function that takes some parameters, constructs an object with those parameters applied, and returns it.
In most typed languages we would define a class or a structure to represent this object; in Fennel (and hence in Lua) we can just use a table.
And this is my favorite part of such languages.

So we used the fn special form to create a function named sphere that takes three parameters: radius, position in space pos, and color ➊.
Then we see another special form, let.
It is used to introduce locally scoped variables, and has another nice property – destructuring ➋.

Let’s quickly understand how let works in this case.
If you know how destructuring works, you can skip this part.

Here’s a simple example:

>> (let [a 1
         b 2]
     (+ a b))
3

We’ve introduced two local variables, a and b, which hold the values 1 and 2 respectively.
Then we computed their sum and returned it as the result.

This is good, but what if we wanted to compute a sum of three vector elements multiplied by b?
Let’s put a vector into a:

>> (let [a [1 2 3]
         b 2]
     ...)

There are many ways to do this, such as reducing over the vector with a function that sums elements, or getting values from the vector in a loop and putting them into some local variable.
However, in the case of our project, we always know exactly how many elements there will be, so we can just take them out by index without any kind of loop:

>> (let [a [1 2 3]
         b 2
         a1 (. a 1)
         a2 (. a 2)
         a3 (. a 3)]
     (* (+ a1 a2 a3) b))
12

Yet this is very verbose, and not really good.
We can make it a bit less verbose by skipping the local variable definitions and using the values directly in the sum:

>> (let [a [1 2 3]
         b 2]
     (print (.. "value of second element is " (. a 2)))
     (* (+ (. a 1) (. a 2) (. a 3)) b))
value of second element is 2
12

However, again, this isn’t great, as we have to repeat the same syntax three times; and what if we want to use the second value from the vector in several places?
Here, I’ve added a print because I particularly care about the second element’s value and want to see it in the log, but I have to repeat myself and fetch the second element twice.
We could use a local binding for this, but we don’t want to do it manually.

That’s where destructuring comes in handy, and trust me, it is a very handy thing.
We can specify a pattern that is applied to our data and binds variables for us, like this:

>> (let [[a1 a2 a3] [1 2 3]
         b 2]
     (print (.. "value of second element is " a2))
     (* (+ a1 a2 a3) b))
value of second element is 2
12

Which works somewhat like this:

[1  2  3]
 ↓  ↓  ↓
[a1 a2 a3]

This is much shorter than any of the previous examples, and allows us to use any of the vector’s values in several places.

We can also destructure maps like this:

>> (var m {:a-key 1 :b-key 2})
>> (let [{:a-key a
          :b-key b} m]
     (+ a b))
3

And there is also a shorthand for when the name of the key and the name of the desired local binding match:

>> (var m {:a 1 :b 2})
>> (let [{: a : b} m]
     (+ a b))
3

Which is even shorter.

All this essentially boils down to this kind of Lua code:

-- vector destructuring
-- (let [[a b] [1 2]] (+ a b))
local _0_ = {1, 2}
local a = _0_[1]
local b = _0_[2]
return (a + b)

-- hashmap destructuring
-- (let [{: a : b} {:a 1 :b 2}] (+ a b))
local _0_ = {a = 1, b = 2}
local a = _0_["a"]
local b = _0_["b"]
return (a + b)

This is nothing special, really, but it still shows the power of Lisp’s macro system, in which destructuring is implemented.
It gets really cool when we use this in function forms, as we will see later.

If we were to call (sphere) now, we would get an error, because we specified a value ➌ for the key :sdf that doesn’t yet exist.
SDF stands for Signed Distance Function.
That is, a function that returns the distance from a given point to an object.
The distance is positive when the point is outside the object, and negative when the point is inside it.

Let’s define an SDF for a sphere.
What’s great about spheres is that to compute the distance to a sphere’s surface, we only need to compute the distance to the center of the sphere and subtract the sphere’s radius from it.

Let’s implement this:

(local sqrt math.sqrt) ➊

(fn sphere-distance [{:pos [sx sy sz] : radius} [x y z]] ➋
  (- (sqrt (+ (^ (- sx x) 2) (^ (- sy y) 2) (^ (- sz z) 2)))
     radius))

For performance reasons we declare math.sqrt as a local variable sqrt that holds the function value ➊, to avoid a repeated table lookup.

As was later pointed out, LuaJIT does optimize such calls, and there is no repeated lookup for method calls.
This is still true for plain Lua, though, so I’m going to keep this as is, but you can skip all these local definitions if you want and use the methods directly.

And at ➋ we again see destructuring, though not in a let block but in the function argument list.
What essentially happens here is this: the function takes two parameters, the first of which is a hashmap that must have a :pos key associated with a vector of three numbers, and a :radius key with a value.
The second parameter is simply a vector of three numbers.
We immediately destructure these parameters into a set of variables local to the function body.
The hashmap is destructured into the sphere’s position vector, which is immediately destructured into sx, sy, and sz, and a radius variable storing the sphere’s radius.
The second parameter is destructured into x, y, and z.
We then compute the resulting value using the formula from above.
Note that Fennel and Lua only understand definitions in order from top to bottom, so we need to define sphere-distance before sphere.

Let’s test our function by passing several points and a sphere of radius 5:

>> (sphere-distance (sphere 5) [5 0 0])
0.0
>> (sphere-distance (sphere 5) [0 15 0])
10.0
>> (sphere-distance (sphere 5) [0 0 0])
-5.0

First we check that we’re on the sphere’s surface: the radius of our sphere is 5, and we’ve set the x coordinate to 5 as well.
Next we check that we’re 10 units away from the sphere, and lastly we check that we’re inside the sphere, because the sphere’s center and our point are both at the origin.

But we can also call this function as a method, with the : syntax:

>> (local s (sphere))
>> (s:sdf [0 0 0])
-5.0

This works because methods in Lua are syntactic sugar.
When we write (s:sdf p), it is essentially equal to (s.sdf s p), and our distance function takes the sphere as its first parameter, which allows us to use the method syntax.
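We can check this equivalence directly; both calls below compute the same distance (the default sphere has radius 5 and sits at the origin, so a point 10 units away is 5 units from the surface):

```fennel
(local s (sphere))
(print (s:sdf [10 0 0]))    ;; method call          → 5.0
(print (s.sdf s [10 0 0]))  ;; explicit equivalent  → 5.0
```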

Now we need a distance estimator – a function that computes the distances to all objects and returns the shortest one, so we can safely extend our ray by that amount.

(local DRAW-DISTANCE 1000)

(fn distance-estimator [point scene]
  (var min DRAW-DISTANCE)
  (var color [0 0 0])
  (each [_ object (ipairs scene)]
    (let [distance (object:sdf point)]
      (when (< distance min)
        (set min distance)
        (set color (. object :color)))))
  (values min color))

This function computes the distance from the given point to each object in the scene, using our signed distance functions, and chooses the minimum distance along with the color of the closest object.
Even though it makes little sense for distance-estimator to return a color, we do it here because we don’t want to repeat this whole computation just to get the color of the endpoint.

Let’s check if this function works:

>> (distance-estimator [5 4 0] [(sphere) (sphere 2 [5 7 0] [0 1 0])])
1.0     [0 1 0]

It works: we obtained the distance to the second sphere, and its color, because the point we specified was closer to that sphere than to the other one.

With the camera, an object, a scene, and this function, we have all we need to start shooting rays and rendering them on screen.

Marching ray

Just as in the raycaster, we cast rays from the camera, but now we do it in 3D space.
In raycasting, our horizontal resolution was determined by the number of rays, and our vertical resolution was basically infinite.
For 3D this is not an option, so our resolution now depends on a 2D matrix of rays instead of a 1D one.

Quick math: how many rays will we need to cast in order to fill up 512 by 448 pixels?
The answer is simple – multiply the width by the height:

>> (* 512 448)
229376

A stunning 229376 rays to march.
And each ray has to do many distance estimations as it marches away from its origin.
Suddenly, all those micro-optimizations, like locals for functions, don’t feel so unnecessary.
Let’s hope for the best, and that LÖVE will handle real-time rendering.
We can begin by creating a function that marches a single ray in the direction our camera looks.
But first, we need to define what we will use to specify coordinates, directions, and so on in our 3D space.

My first attempt was to use spherical coordinates to define the ray direction, and to move points in 3D space relative to the camera.
However, this had a lot of problems, especially when looking at objects at angles other than 90 degrees.
Here’s a screenshot of me looking at the sphere from the “front”:

And here’s when looking from “above”:

And when I added a cube object, I noticed a slight fish-eye distortion effect:

Which was not great at all.
So I decided to remake everything with vectors and build a proper camera with a “look-at” point, compute a projection plane, and so on.

And to do this we need to be able to work with vectors: add them, multiply them, normalize them, etc.
I wanted to refresh my knowledge of this topic, so I decided not to use any existing vector library and to implement everything from scratch.
It’s not that hard.
Especially when we already have vectors in the language, and can destructure them into variables with ease.

So we need these basic functions:

  • vec3 – a constructor with some handy semantics,
  • vec-length – a function that computes the magnitude of a vector,
  • arithmetic functions, such as vec-sub, vec-add, and vec-mul,
  • and other vector functions, mainly norm, dot, and cross.

Here’s the source code of each of these functions:

(fn vec3 [x y z]
  (if (not x) [0 0 0]
      (and (not y) (not z)) [x x x]
      [x y (or z 0)]))

(fn vec-length [[x y z]]
  (sqrt (+ (^ x 2) (^ y 2) (^ z 2))))

(fn vec-sub [[x0 y0 z0] [x1 y1 z1]]
  [(- x0 x1) (- y0 y1) (- z0 z1)])

(fn vec-add [[x0 y0 z0] [x1 y1 z1]]
  [(+ x0 x1) (+ y0 y1) (+ z0 z1)])

(fn vec-mul [[x0 y0 z0] [x1 y1 z1]]
  [(* x0 x1) (* y0 y1) (* z0 z1)])

(fn norm [v]
  (let [len (vec-length v)
        [x y z] v]
    [(/ x len) (/ y len) (/ z len)]))

(fn dot [[x0 y0 z0] [x1 y1 z1]]
  (+ (* x0 x1) (* y0 y1) (* z0 z1)))

(fn cross [[x0 y0 z0] [x1 y1 z1]]
  [(- (* y0 z1) (* z0 y1))
   (- (* z0 x1) (* x0 z1))
   (- (* x0 y1) (* y0 x1))])

Since we already know how destructuring works, it’s not hard to see what these functions do.
vec3, however, has some logic in it, and you can notice that its if has three outcomes.
if in Fennel is more like cond in other Lisps, which means that we can specify as many else-if branches as we want.

Calling vec3 without arguments produces a zero-length vector [0 0 0].
If called with one argument, it returns a vector where each coordinate is set to that argument: (vec3 3) will produce [3 3 3].
In the remaining cases z is either specified or not, so we can simply create a vector with x, y, and either z or 0.
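These semantics, and the rest of the helpers, are easy to sanity-check in the REPL (assuming the definitions above are loaded; the expected results are shown in the comments):

```fennel
;; vec3's three cases:
(vec3)        ;; → [0 0 0]
(vec3 3)      ;; → [3 3 3]
(vec3 1 2)    ;; → [1 2 0]

;; a few sanity checks for the other helpers:
(vec-length [3 4 0])      ;; → 5.0
(dot [1 0 0] [0 1 0])     ;; → 0  (perpendicular vectors)
(cross [1 0 0] [0 1 0])   ;; → [0 0 1]
(norm [10 0 0])           ;; → [1.0 0.0 0.0]
```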

You may wonder why these are defined as functions, and why I didn’t implement operator overloading, so we could simply use + or * to compute values.
I tried this; however, it is extremely slow, since every operation has to do a lookup in the metatable, and that is really slow.

Here’s a quick benchmark:

(macro time [body]
  `(let [clock# os.clock
         start# (clock#)
         res# ,body
         end# (clock#)]
     (print (.. "Elapsed " (* 1000 (- end# start#)) " ms"))
     res#))

;; operator overloading
(var vector {})
(set vector.__index vector)

(fn vec3-meta [x y z]
  (setmetatable [x y z] vector))

(fn vector.__add [[x1 y1 z1] [x2 y2 z2]]
  (vec3-meta (+ x1 x2) (+ y1 y2) (+ z1 z2)))

(local v0 (vec3-meta 1 1 1))
(time (for [i 0 1000000] (+ v0 v0 v0 v0)))

;; basic functions
(fn vec3 [x y z]
  [x y z])

(fn vector-add [[x1 y1 z1] [x2 y2 z2]]
  (vec3 (+ x1 x2) (+ y1 y2) (+ z1 z2)))

(local v1 (vec3 1 1 1))
(time (for [i 0 1000000] (vector-add (vector-add (vector-add v1 v1) v1) v1)))

If we run it with the Lua interpreter, we’ll see the difference:

$ fennel --compile test.fnl | lua
Elapsed 1667.58 ms
Elapsed 1316.078 ms

Testing this with LuaJIT suggests that the overloaded version is actually faster; however, I’ve experienced a major slowdown in the renderer – everything ran about 70% slower, according to the frames-per-second count.
So functions are okay, even though they are much more verbose.

Now we can define a march-ray function:

(fn move-point [point dir distance] ➊
  (vec-add point (vec-mul dir (vec3 distance))))

(local MARCH-DELTA 0.0001)
(local MAX-STEPS 500)

(fn march-ray [origin direction scene]
  (var steps 0)
  (var distance 0)
  (var color nil)

  (var not-done? true) ➋
  (while not-done?
    (let [(new-distance
           new-color) ➍ (-> origin
                            (move-point direction distance)
                            (distance-estimator scene))]
      (when (or (< new-distance MARCH-DELTA)
                (>= distance DRAW-DISTANCE)
                (> steps MAX-STEPS) ➌)
        (set not-done? false))
      (set distance (+ distance new-distance))
      (set color new-color)
      (set steps (+ steps 1))))
  (values distance color steps))

Not much, but we have some things to discuss.

First, we define a function to move a point in 3D space ➊.
It accepts a point, which is a three-dimensional vector; a direction vector dir, which must be normalized; and a distance.
We then multiply the direction vector by a vector made of our distance, and add the result to the point.
Simple and easy.
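For example, moving a point 5 units along the y axis (expected result in the comment):

```fennel
;; point + direction * distance, with a unit direction vector
(move-point [1 2 3] [0 1 0] 5)   ;; → [1 7 3]
```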

Next we define several constants, and the march-ray function itself.
It defines some local vars that hold initial values, and uses a while loop to march the given ray enough times.
You can notice that at ➋ we created a not-done? var holding true, and then used it as the test of the while loop.
And you can also notice that at ➌ we have a condition upon which we set not-done? to false and exit the loop.
So you may wonder: why not use a for loop instead?
Lua supports index-based for loops.
Fennel also supports these.
So why use while with a variable?

Because Fennel has no break special form, for some reason.

Here’s a little rant.
You can skip it if you’re not interested in me making unconfirmed inferences about Fennel :).

I think Fennel doesn’t support break because it is influenced by Clojure (correct me if I’m wrong), and Clojure doesn’t have break either.
However, looping in Clojure is a bit more controllable, as we choose when we want to go to the next iteration:

(loop [i 0]
  ;; do stuff
  (when (< i 10)
    (recur (+ i 1))))

Which means: while i is less than 10, we perform another iteration.

In Fennel, however, the concept isn’t quite like this, because we have to define a var explicitly and put it into the while test position:

(var i 0)
(while (< i 10)
  ;; do stuff
  (set i (+ i 1)))

You may not see the difference, but I do.
This loop can also be trivially expressed as a for loop: (for [i 0 10] (do-stuff)).
However, not every construct can be expressed as a for loop when we don’t have break.
And in Clojure we don’t have to declare a variable outside the loop, since loop does it for us, but the biggest difference is here:

(loop [i 0]
  (when (or (< i 100)
            (< (some-foo) 1000))
    (recur (inc i))))

Notice that we’re looping until i reaches 100, or until some-foo returns something greater than 1000.
We can easily express this as for loop in Lua:

for i = 0, 100 do
   if some_foo() > 1000 then
      break
   end
end

However, we can’t do the same in Fennel, because there’s no break.
In this case we could define an i var and put both conditions into the while loop test, like this:

(var i 0)
(while (or (< i 100)
           (< (some-foo) 1000))
  (set i (+ i 1)))

Which is almost like the Clojure example, and you may wonder why I complain, but in the case of the march-ray function we can’t even do this!
The function we call returns multiple values, which we need to destructure ➍ in order to test them.
And in some loops such a function may depend on the context of the loop, so it has to be inside the loop, not in the test.

So not having break, or any ability to control when to go to the next iteration, is a serious disadvantage.
Yes, Clojure’s recur is also limited, since it must be in tail position, so you can’t use it as continue or something like that.
But it’s still a somewhat more powerful construct.
I’ve actually thought about writing a loop macro, but it seems it’s not as easy to do in Fennel as in Clojure, because Fennel lacks some built-in functions for manipulating sequences.
I mean, it’s totally doable, but it requires way too much work compared to defining a Boolean var and setting it in the loop.

At ➍ we see syntax that I didn’t cover before: (let [(a b) (foo)] ...).
Many of us who are familiar with Lisp, and especially with Racket, may be confused.
You see, in Racket and other Scheme implementations (those that allow using different kinds of parentheses) let has this kind of syntax:

(let [(a 1)   ;; In Scheme square brackets around bindings
      (b 41)] ;; are replaced with parentheses
  (+ a b))

Or, more generally, (let ((name1 value1) (name2 value2) ...) body).
However, in the case of the march-ray function we see a similar form, except the second element has no value specified.
This is again valid syntax in some Lisps (Common Lisp, for example), where we can make a binding that holds nothing and set it later, but this is not what happens in this code, as we don’t use foo at all:

(let [(a b) (foo)]
  (+ a b))

And since in Fennel we don’t need parentheses around bindings, and simply specify them as a flat sequence [name1 value1 name2 value2 ...], another confusion is possible.
You may think that (a b) is a function call that produces a name, and (foo) is a function call that produces a value.
But then we somehow use a and b.
What is happening here?

But this is just another kind of destructuring available in Fennel.

Lua has one universal data structure, called a table.
However, Lua doesn’t have any special syntax for destructuring, so when a function needs to return several values, you have two options.
First, you can return a table:

function returns_table(a, b)
   return {a, b}
end

But the user of such a function will have to get the values out of the table themselves:

local res = returns_table(1, 2)
local a, b = unpack(res) -- or use indexes, e.g. local a = res[1]
print("a: " .. a .. ", b: " .. b)
-- a: 1, b: 2

But this is extra work, and it ties the values together into a data structure, which may not be what you want.
So Lua has a shorthand for this – you can return multiple values:

function returns_values(a, b)
   return a, b
end

local a, b = returns_values(1, 2)
print("a: " .. a .. ", b: " .. b)
-- a: 1, b: 2

This is shorter and more concise.
Fennel also supports this multivalue return, with the values special form:

(fn returns-values [a b]
  (values a b))

This is equivalent to the previous code, but how do we use these values?
All binding forms in Fennel support destructuring, so we can write this as:

(local (a b) (returns-values 1 2))
(print (.. "a: " a ", b: " b))
;; a: 1, b: 2

The same can be done with vectors or maps when defining local, var, or global variables:

(local [a b c] (returns-vector)) ;; returns [1 2 3]
(var {:x x :y y :z z} (returns-map)) ;; returns {:x 1 :y 2 :z 3}
(global (bar baz) (returns-values)) ;; returns (values 1 2)

And all of this works in let or when defining a function!
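
For instance, a function can destructure its arguments directly in the argument list; the function and values here are illustrative:

```fennel
;; destructuring in a function's argument list:
;; a map argument and a sequential table argument
(fn print-point [{:x x :y y} [r g b]]
  (print x y r g b))

(print-point {:x 1 :y 2} [0.1 0.2 0.3])
;; prints all five values: 1 2 0.1 0.2 0.3
```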

We’ve defined a function that marches a ray, now we need to shoot some!

Shooting rays

As with math functions, let’s define some local definitions somewhere at the top of the file:

(local love-points love.graphics.points)
(local love-dimensions love.graphics.getDimensions)
(local love-set-color love.graphics.setColor)
(local love-key-pressed? love.keyboard.isDown)
(local love-get-joysticks love.joystick.getJoysticks)

This is pretty much all we’ll need from LÖVE – two functions for drawing colored pixels, one function to get the resolution of the window, and input handling functions for the keyboard and gamepad.
We’ll also define some functions in the love namespace table (I’m not sure what it’s properly called in Lua, since it’s just a table that acts like a namespace) – love.load, love.draw, and others along the way.

Let’s begin by initializing our window:

(local window-width 512)
(local window-height 448)
(local window-flags {:resizable true :vsync false :minwidth 256 :minheight 224})

(fn love.load []
  (love.window.setTitle "LÖVE Raymarching")
  (love.window.setMode window-width window-height window-flags))

This will set our window’s default width and height to 512 by 448 pixels, and its minimum width and height to 256 by 224 pixels respectively.
We also set the window title to "LÖVE Raymarching", but that part is fully optional.

Now we can define the love.draw function, which will shoot one ray per pixel and draw that pixel with the appropriate color.
However, we need a way of saying in which direction we want to shoot each ray.
To define the direction we will first need a projection plane and a lookat point.

Let’s create a lookat point as a simple zero vector [0 0 0] for now:

(var lookat [0 0 0])

Now we need to understand how to define our projection plane.
In our case, the projection plane is the plane of our screen, and our camera sits some distance away from it.
We also want to be able to change our field of view, or FOV for short, so we need a way of computing the distance to the projection plane, since the closer we are to it, the wider our field of view.

We can easily compute the distance if we have an angle, which we can also define as a var:
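
The field of view here is just a var holding an angle in degrees; the exact value used originally isn’t shown, so 70 below is an example:

```fennel
(var fov 70) ;; field of view in degrees; 70 is an example value
```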

Now we can compute our projection distance (PD) using this formula:

PD = 1 / tan(fov / 2)

Where fov is in radians.
And to compute radians we’ll need this constant:

(local RAD (/ math.pi 180.0))

Now we can transform any angle into radians by multiplying it by this value.
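
For example, a right angle converts like this:

```fennel
(local RAD (/ math.pi 180.0))
(print (* 90 RAD))  ;; ~1.5708, i.e. pi/2
(print (* 180 RAD)) ;; ~3.1416, i.e. pi
```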

At this point we know the distance to our projection plane, but we don’t know its size and position.
First, we need a ray origin (RO), and we already have it as our camera, so our ro will be equal to the current value of camera.pos.
Next, we need a look-at point, and we have it as the lookat variable, which is set to [0 0 0].
Now we can define a direction vector that will specify our forward direction:

F = norm(lookat - RO)

And if we move our point along this vector F by the distance we computed previously, we’ll arrive at the center of our projection plane, which we can call C:

C = RO + F * PD

The last thing we need to know, in order to get our orientation in space, is where up and right are.
We can compute this by specifying an upward vector and taking its cross product with our forward vector, producing a vector that is perpendicular to both and points to the right.
To do this we need an up vector, which we define as [0 0 -1].
You may wonder why it is defined with a negative z axis, but this is done so that positive z values actually go up as we look from the camera, and right is to the right.
We then compute the right vector as follows:

R = norm(cross([0 0 -1], F))

And the up vector U is the cross product of F and R. Let’s write this down in love.draw:

(fn love.draw []
  (let [(width height) (love-dimensions)
        projection-distance (/ 1 (tan (* (/ fov 2) RAD)))
        ro camera.pos
        f (norm (vec-sub lookat ro))
        c (vec-add ro (vec-mul f (vec3 projection-distance)))
        r (norm (cross [0 0 -1] f))
        u (cross f r)]
    nil)) ;; TBD

Currently we only compute these values but don’t use them, hence the nil at the end of the let.
But now that we know where our projection plane is, and where right and up are, for any x and y pixel coordinates we can compute the corresponding intersection point on the plane, and from it a direction vector.

So, for each x from 0 to width and each y from 0 to height we will compute uv-x and uv-y coordinates, and find the direction vector rd.
To find uv-x we center the current x by dividing it by width and subtracting 0.5, then multiply by the aspect ratio width/height.
For uv-y we only need to divide the current y by height and subtract 0.5:

(for [y 0 height]
  (for [x 0 width]
    (let [uv-x (* (- (/ x width) 0.5) (/ width height))
          uv-y (- (/ y height) 0.5)]
      nil))) ;; TBD

Now that we have uv-x and uv-y, we can compute the intersection point I, using the up and right vectors and the center of the plane:

I = C + R * uv-x + U * uv-y

And finally compute our direction vector RD:

RD = norm(I - RO)

And now we can use our march-ray procedure to compute distance and color of the pixel.
Let’s wrap everything up:

(local tan math.tan)
(fn love.draw []
  (let [projection-distance (/ 1 (tan (* (/ fov 2) RAD)))
        ro camera.pos
        f (norm (vec-sub lookat ro))
        c (vec-add ro (vec-mul f (vec3 projection-distance)))
        r (norm (cross [0 0 -1] f))
        u (cross f r)
        (width height) (love-dimensions)]
    (for [y 0 height]
      (for [x 0 width]
        (let [uv-x (* (- (/ x width) 0.5) (/ width height))
              uv-y (- (/ y height) 0.5)
              i (vec-add c (vec-add
                            (vec-mul r (vec3 uv-x))
                            (vec-mul u (vec3 uv-y))))
              rd (norm (vec-sub i ro))
              (distance color) (march-ray ro rd scene)]
          (if (< distance DRAW-DISTANCE)
              (love-set-color color)
              (love-set-color 0 0 0))
          (love-points x y))))))

Now, if we set the scene to contain a default sphere, and place our camera at [20 0 0], we should see this:

Which is correct, because our default sphere has white as the default color.

You may notice that we compute distance and color by calling (march-ray ro rd scene), and then check whether distance is less than DRAW-DISTANCE.
If it is, we set the pixel’s color to the color found by the march-ray function; otherwise we set it to black.
Lastly we draw the pixel to the screen and repeat the whole process for the next intersection point, and thus the next pixel.

But we don’t have to draw black pixels when we didn’t hit anything!
Remember that in the beginning I wrote that when a ray passes close by an object, it takes many steps; we can use this data to render glow.
So if we modify the love.draw function a bit, we will be able to see the glow around our sphere.
And the closer the ray got to the sphere, the stronger the glow will be:

;; rest of love.draw
(let [ ;; rest of love.draw
      (distance color steps) (march-ray ro rd scene)]
  (if (< distance DRAW-DISTANCE)
    (love-set-color color)
    (love-set-color (vec3 (/ steps 100))))
  (love-points x y))
;; rest of love.draw

Here, I’m setting the color to the number of steps divided by 100, which results in this glow effect:

Similarly to this glow effect, we can create fake ambient occlusion – the more steps we took before hitting the surface, the more complex it is, hence the less ambient light should be able to reach it.
Unfortunately, the only object we have at the moment is a sphere, so there’s no way to show this trick, as its surface isn’t very complex.

All this may seem expensive, and it actually is.
Unfortunately, Lua doesn’t have real multithreading to speed this up, and the threads feature provided by LÖVE resulted in even worse performance than computing everything in a single thread.
Well, at least the way I’ve tried it.
There’s a shader DSL in LÖVE, which could be used to compute this stuff on the GPU, but that is currently out of the scope of this project, as I wanted to implement everything in Fennel.

Speaking of shaders: now that we can draw pixels on the screen, we can also shade them, and compute lighting and reflections!

Lighting and reflections

Before we begin implementing lighting, let’s add two more objects – a ground plane and an arbitrary box.
Much like with the sphere object, we first define a signed distance function, and then the constructor for the object:

(local abs math.abs)

(fn box-distance [{:pos [box-x box-y box-z]
                   :dimensions [x-side y-side z-side]}
                  [x y z]]
  (sqrt (+ (^ (max 0 (- (abs (- box-x x)) (/ x-side 2))) 2)
           (^ (max 0 (- (abs (- box-y y)) (/ y-side 2))) 2)
           (^ (max 0 (- (abs (- box-z z)) (/ z-side 2))) 2))))

(fn box [sides pos color]
  (let [[x y z] (or pos [0 0 0])
        [x-side y-side z-side] (or sides [10 10 10])
        [r g b] (or color [1 1 1])]
    {:dimensions [(or x-side 10)
                  (or y-side 10)
                  (or z-side 10)]
     :pos [(or x 0) (or y 0) (or z 0)]
     :color [(or r 0) (or g 0) (or b 0)]
     :sdf box-distance}))

(fn ground-plane [z color]
  (let [[r g b] (or color [1 1 1])]
    {:z (or z 0)
     :color [(or r 0) (or g 0) (or b 0)]
     :sdf (fn [plane [_ _ z]] (- z plane.z))}))

In the case of ground-plane we incorporate :sdf as an anonymous function, because it is a simple one-liner.

Now that we have more objects, let’s add them to the scene and see if they work:

(var camera {:pos [20.0 50.0 0.0]
             :x-rotate 0.0
             :z-rotate 0.0})

(local scene [(sphere nil [-6 0 0] [1 0 0])
              (box nil [6 0 0] [0 1 0])
              (ground-plane -10 [0 0 1])])

With this scene and camera we should see this:

It’s a bit hard on the eyes, but at least we can be sure that everything works correctly.
Now we can implement lighting.

In order to calculate lighting we’ll need the normal to the surface at a point.
Let’s create a get-normal function that receives a point and our scene:

(fn get-normal [[px py pz] scene]
  (let [x MARCH-DELTA
        (d) (distance-estimator [px py pz] scene)
        (dx) (distance-estimator [(- px x) py pz] scene)
        (dy) (distance-estimator [px (- py x) pz] scene)
        (dz) (distance-estimator [px py (- pz x)] scene)]
    (norm [(- d dx) (- d dy) (- d dz)])))

It is a nice trick: we create three more points slightly shifted from the original one along each axis, feed them to the existing distance estimation function, and then normalize the vector made of the differences between the original distance and each shifted distance.
Let’s use this function to get the normal for each point, and use the normal as our color:

;; rest of love.draw
(if (< distance DRAW-DISTANCE)
    (love-set-color (get-normal (move-point ro rd distance) scene))
    (love-set-color 0 0 0))
;; rest of love.draw

Notice that in order to get the endpoint of our ray we move-point ro along the direction rd by the computed distance.
We then pass the resulting point and our scene into get-normal, computing the normal vector, which we then pass to love-set-color, and it gives us this result:

You can see that the ground-plane remained blue, and this isn’t an error. Blue in our case is [0 0 1], and since in our world positive z coordinates point up, we can see that directly in the resulting color of the plane.
The top of the cube and the sphere are also blue, and front side is green, which means that our normals are correct.

Now we can compute basic lighting.
For that we’ll need a light object:
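
The light in this renderer is simply a position in space stored in a var; since the original definition isn’t shown here, the coordinates below are an example:

```fennel
(var light [30 -30 30]) ;; example light position: off to one side and above
```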

Let’s create a shade-point function that will accept a point, the point’s color, the light position, and a scene:

(fn shade-point [point color light scene]
  (vec-mul color (vec3 (point-lightness point scene light))))

It may seem that this function’s only purpose is to call point-lightness, which we will define a bit later, and return a new color.
And this is true, at least for now.
Let’s create point-lightness function:

(fn clamp [a l t]
  (if (< a l) l
      (> a t) t
      a))

(fn above-surface-point [point normal]
  (vec-add point (vec-mul normal (vec3 (* MARCH-DELTA 2)))))

(fn point-lightness [point scene light]
  (let [normal (get-normal point scene) ➊
        light-vec (norm (vec-sub light point))
        (distance) (march-ray (above-surface-point point normal) light-vec scene) ➋
        lightness (clamp (dot light-vec normal) 0 1)] ➌
    (if (< distance DRAW-DISTANCE)
        (* lightness 0.5)
        lightness)))

What this function does is simple.
We compute the normal ➊ for the given point, then find a point just above the surface using the above-surface-point function ➋.
We use this point as our new ray origin to march towards the light.
We then take the distance from the march-ray function and check whether we went all the way to the maximum distance or not.
If not, it means there was a hit, and we halve the total lightness, thus creating a shadow.
Otherwise we return the lightness as is.
And lightness is the dot product between light-vec and the normal to the surface ➌, where light-vec is a normalized vector from the point to the light.

If we again modify our love.draw function like this:

;; rest of love.draw
(if (< distance DRAW-DISTANCE)
    (let [point (move-point ro rd distance)]
      (love-set-color (shade-point point color scene light)))
    (love-set-color 0 0 0))
;; rest of love.draw

We should see the shadows:

This already looks like real 3D, and it is.
But we can do a bit more, so let’s add reflections.

Let’s create a reflection-color function:

(var reflection-count 3)

(fn reflection-color [color point direction scene light]
  (var [color p d i n] [color point direction 0 (get-normal point scene)]) ➊
  (var not-done? true)
  (while (and (< i reflection-count) not-done?)
    (let [r (vec-sub d (vec-mul (vec-mul (vec3 (dot d n)) n) [2 2 2])) ➋
          (distance new-color) (march-ray (above-surface-point p n) r scene)] ➌
      (if (< distance DRAW-DISTANCE)
          (do (set p (move-point p r distance))
              (set n (get-normal p scene))
              (set d r) ➍
              (let [[r0 g0 b0] color
                    [r1 g1 b1] new-color
                    l (/ (point-lightness p scene light) 2)]
                (set color [(* (+ r0 (* r1 l)) 0.66)
                            (* (+ g0 (* g1 l)) 0.66)
                            (* (+ b0 (* b1 l)) 0.66)]) ➎))
          (set not-done? false) ➏))
    (set i (+ i 1)) ➐)
  color)

This is quite a big function, so let’s look at it piece by piece.

First, we use destructuring to define several vars ➊ that we will be able to change with set later in the function.
Next we go into a while loop, which checks both whether the maximum number of reflections was reached and whether the ray went off to infinity.
The first thing we do in the loop is compute the reflection vector r ➋, using this formula:

r = d - 2 * (d · n) * n

This is our new direction, which we march along from a new above-surface-point ➌.
If we hit something, and the distance is less than DRAW-DISTANCE, we set our point p to the new point, compute the new normal n, and set the direction d to the reflection vector r ➍.
Next we compute the resulting color.
I’m doing a simple color addition here, which is not an entirely correct way of doing it, but for now I’m fine with that.
We also compute the lightness of the reflection point and divide it by 2, so our reflections appear slightly darker.
Then we add each channel and make sure it doesn’t exceed 1, by multiplying it by 0.66 ➎.
The trick here is that the maximum reflection lightness we can get is 0.5, so if we add two values, one of which is scaled by at most 0.5, the overall result can be brought back into range by multiplying by 0.66.
This way we don’t lose brightness along the way, and the reflection color blends nicely with the original color.
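
To see why 0.66 keeps a channel in range, here is the worst case: the original channel at 1.0 plus a reflected channel at 1.0 scaled by the maximum lightness of 0.5 (a sketch of the arithmetic, not part of the renderer):

```fennel
;; brightest possible case for a single color channel
(let [r0 1.0  ;; original channel at maximum
      r1 1.0  ;; reflected channel at maximum
      l 0.5]  ;; reflection lightness is capped at 0.5
  (print (* (+ r0 (* r1 l)) 0.66))) ;; (1.0 + 0.5) * 0.66 = 0.99, just below 1
```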

If we don’t hit anything, it means this is the final reflection, so we can end ➏ the while loop on this iteration.
Lastly, since I’ve already ranted about the absence of break in Fennel, we have to increase the loop counter manually ➐ at the end of the loop.

Let’s change shade-point so it passes the color into this function:

(fn shade-point [point color direction scene light]
  (-> color
      (vec-mul (vec3 (point-lightness point scene light)))
      (reflection-color point direction scene light)))

Notice that I’ve added a direction parameter, as we need it for computing reflections, so we also have to change the call to shade-point in the love.draw function:

;; rest of love.draw
(if (< distance DRAW-DISTANCE)
    (let [point (move-point ro rd distance)]
      (love-set-color (shade-point point color rd scene light))) ;; rd is our initial direction
    (love-set-color 0 0 0))
;; rest of love.draw

Let’s try this out (I’ve brought the ground-plane a bit closer to the objects so we can better see the reflections):

We can see reflections, and reflections of reflections in reflections, because we previously set reflection-count to 3.
Currently our reflections are pure mirrors, as we reflect everything at a perfect angle, and the reflected shapes appear just like the real objects.
This could be changed by introducing materials with qualities like roughness, and by using a better shading algorithm such as Phong shading, but maybe next time.
Refractions also kind of need materials, as the refraction angle differs depending on what kind of material the ray goes through.
E.g. glass and a still pool of water should have different refraction angles.
And some surfaces should reflect rays at certain angles and let them pass through at others, which would also require certain modifications to the reflection algorithm.

Now, if we set our lookat, camera, and scene to:

(local lookat [19.75 49 19.74])

(var camera {:pos [20 50 20]
             :x-rotate 0
             :z-rotate 0})

(local scene [(box [5 5 5] [-2.7 -2 2.5] [0.79 0.69 0.59])
              (box [5 5 5] [2.7 2 2.5] [0.75 0.08 0.66])
              (box [5 5 5] [0 0 7.5] [0.33 0.73 0.42])
              (sphere 2.5 [-2.7 2.5 2.5] [0.56 0.11 0.05])
              (sphere 10 [6 -20 10] [0.97 0.71 0.17])
              (ground-plane 0 [0.97 0.27 0.35])])

We would see an image from the beginning of this post:

For now, I’m pretty happy with the current result, so lastly let’s make it possible to move around our 3D space.

User input

We’ll support two different ways of moving around our scene – with the keyboard and with a gamepad.
The difference is mostly in the fact that a gamepad can give us floating point values, so we can move slower or faster depending on how far we push the analog sticks.

We’ve already specified the needed functions from LÖVE as locals, but to recap, we’ll need only two:

(local love-key-pressed? love.keyboard.isDown)
(local love-get-joysticks love.joystick.getJoysticks)

But first we’ll need to make changes to our camera, as currently it can only look at the origin.

How will we compute the lookat point for our camera so we can move it around in a meaningful way?
I’ve decided that a good way is to “move” the camera forward a certain amount, and then rotate the resulting point around the camera by some angles.
Luckily for us, we’ve already specified that our camera has two angles, :x-rotate and :z-rotate:

(var camera {:pos [20 50 20]
             :x-rotate 255
             :z-rotate 15})

And it is also declared as a var, which means we can set new values into it.
Let’s write a function that computes a new lookat point from the current camera position and rotation:

(local cos math.cos)
(local sin math.sin)

(fn rotate-point [[x y z] [ax ay az] x-angle z-angle]
  (let [x (- x ax)
        y (- y ay)
        z (- z az)
        x-angle (* x-angle RAD)
        z-angle (* z-angle RAD)
        cos-x (cos x-angle)
        sin-x (sin x-angle)
        cos-z (cos z-angle)
        sin-z (sin z-angle)]
    [(+ (* cos-x cos-z x) (* (- sin-x) y) (* cos-x sin-z z) ax)
     (+ (* sin-x cos-z x) (* cos-x y) (- (* sin-x sin-z z)) ay)
     (+ (* (- sin-z) x) (* cos-z z) az)]))

(fn forward-vec [camera]
  (let [pos camera.pos]
    (rotate-point (vec-add pos [1 0 0]) pos camera.x-rotate camera.z-rotate)))

The first function, rotate-point, rotates one point around another by two angles given in degrees.
It is based on aircraft principal axes, but we only have two axes, so we don’t need to “roll”, hence a little less computation here.
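
A quick sanity check of rotate-point: rotating [1 0 0] around the origin by 90 degrees in x and 0 in z should land (up to floating-point error) on the y axis:

```fennel
;; assumes rotate-point and RAD from above are in scope
(local [x y z] (rotate-point [1 0 0] [0 0 0] 90 0))
(print x y z) ;; approximately 0 1 0
```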

Next is the forward-vec function, which computes the current “forward” vector for the camera.
Forward in this case means the direction the camera is “facing”, which is based on the two angles we store in the camera.

With this function we can implement basic movement and rotation functions for camera:

(fn camera-forward [n]
  (let [dir (norm (vec-sub (forward-vec camera) camera.pos))]
    (set camera.pos (move-point camera.pos dir n))))

(fn camera-elevate [n]
  (set camera.pos (vec-add camera.pos [0 0 n])))

(fn camera-rotate-x [x]
  (set camera.x-rotate (% (- camera.x-rotate x) 360)))

(fn camera-rotate-z [z]
  (set camera.z-rotate (clamp (+ camera.z-rotate z) -89.9 89.9)))

(fn camera-strafe [x]
  (let [z-rotate camera.z-rotate]
    (set camera.z-rotate 0)
    (camera-rotate-x 90)
    (camera-forward x)
    (camera-rotate-x -90)
    (set camera.z-rotate z-rotate)))

And if we modify our love.draw again, we’ll be able to use the computed lookat point as follows:

(fn love.draw []
  (let [;; rest of love.draw
        lookat (forward-vec camera)
        ;; rest of love.draw

Now we don’t need a global lookat variable, and it’s actually enough for us to compute a new lookat every frame.

As for movement, let’s implement a simple keyboard handler:

(fn handle-keyboard-input []
  (if (love-key-pressed? "w") (camera-forward 1)
      (love-key-pressed? "s") (camera-forward -1))
  (if (love-key-pressed? "d")
      (if (love-key-pressed? "lshift")
          (camera-strafe 1)
          (camera-rotate-x 1))
      (love-key-pressed? "a")
      (if (love-key-pressed? "lshift")
          (camera-strafe -1)
          (camera-rotate-x -1)))
  (if (love-key-pressed? "q") (camera-rotate-z 1)
      (love-key-pressed? "e") (camera-rotate-z -1))
  (if (love-key-pressed? "r") (camera-elevate 1)
      (love-key-pressed? "f") (camera-elevate -1)))

Similarly, we can implement controller support:

(fn handle-controller []
  (when gamepad
    (let [lstick-x  (gamepad:getGamepadAxis "leftx")
          lstick-y  (gamepad:getGamepadAxis "lefty")
          l2        (gamepad:getGamepadAxis "triggerleft")
          rstick-x  (gamepad:getGamepadAxis "rightx")
          rstick-y  (gamepad:getGamepadAxis "righty")
          r2        (gamepad:getGamepadAxis "triggerright")]
      (when (and lstick-y (or (< lstick-y -0.2) (> lstick-y 0.2)))
        (camera-forward (* 2 (- lstick-y))))
      (when (and lstick-x (or (< lstick-x -0.2) (> lstick-x 0.2)))
        (camera-strafe (* 2 lstick-x)))
      (when (and rstick-x (or (< rstick-x -0.2) (> rstick-x 0.2)))
        (camera-rotate-x (* 4 rstick-x)))
      (when (and rstick-y (or (< rstick-y -0.2) (> rstick-y 0.2)))
        (camera-rotate-z (* 4 rstick-y)))
      (when (and r2 (> r2 -0.8))
        (camera-elevate (+ 1 r2)))
      (when (and l2 (> l2 -0.8))
        (camera-elevate (- (+ 1 l2)))))))

For the controller only, we make sure that our l2 and r2 axes go from 0 to 2, since by default these axes go from -1 to 1, which isn’t going to work for us.
In the same vein we could add the ability to change the field of view or the reflection count, but I’ll leave that to those who are interested in trying it themselves.
It’s not hard.
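
The remap mentioned above is just a shift of the axis range (trigger->elevation is my name, for illustration):

```fennel
;; a trigger axis reports -1 (released) .. 1 (fully pressed);
;; adding 1 shifts that to 0 .. 2, used as the elevation amount
(fn trigger->elevation [axis]
  (+ 1 axis))

(print (trigger->elevation -1))  ;; 0: released, no movement
(print (trigger->elevation 0.5)) ;; 1.5: mostly pressed
```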

As a final piece, we need to detect whether a controller was connected, and handle key presses somewhere.
So let’s add these two final functions that we need for everything to work:

(var gamepad nil)

(fn love.joystickadded [g]
  (set gamepad g))

(fn love.update [dt]
  (handle-keyboard-input)
  (handle-controller))

love.joystickadded will take care of watching for new controllers, and love.update will poll for new input every frame.

By this moment we should have a working raymarching 3D renderer with basic lighting and reflections!

Final thoughts

I’ve decided to write this post because I was interested in three topics:

  • Fennel, a Lisp-like language, which is a lot like Clojure syntax-wise and has great interop with Lua (because it IS Lua),
  • LÖVE, a nice game engine I’ve been watching for a long time already, and played some games written with it, which were quite awesome,
  • and Lua itself, a nice, fast scripting language, with cool idea that everything is a table.

Although I didn’t use much raw Lua here, I actually tinkered with it a lot during the whole process, testing different things, reading Fennel’s compiler output, and benchmarking various constructs, like field access, or unpacking numeric tables versus multiple return values.
Lua has some really cool semantics for defining modules as tables and attaching special meaning to tables via setmetatable, which in my opinion is really easy to understand.

Fennel is a great choice if you don’t want to learn Lua syntax (which is small, but, you know, it exists).
For me, Fennel is a great language because I don’t have to deal with Lua syntax AND because I can write macros.
And even though I didn’t write any macros for this project, because everything needed is already present in Fennel itself, the possibility of doing so is worth something.
Also, while benchmarking various features, I used a self-written time macro:

(macro time [body]
  `(let [clock# os.clock
         start# (clock#)
         res# ,body
         end# (clock#)]
     (print (.. "Elapsed: " (* 1000 (- end# start#)) " ms"))
     res#))

So the ability to define such things is a good thing.

LÖVE is a great engine, and although I used only a tiny bit of it, I still think it is a really cool project, because there is so much more in it.
Maybe some day I’ll make a game that realizes LÖVE’s full potential.

On the downside…
The resulting raymarcher is very slow.
I’ve managed to get around 25 FPS with a single object in the scene at a 256 by 224 pixel resolution.
Yes, this is because it runs in a single thread and does a lot of expensive computations.
Lua itself isn’t a very fast language, and even though LÖVE uses LuaJIT – a just-in-time compiler that emits machine code – it’s still not fast enough for certain operations or techniques.
For example, if we implemented operator overloading for our vectors, we’d lose a lot of performance to constant table lookups.
This is a known tradeoff in Lua: it does its best to stay small and embeddable, so it can run on nearly anything, and therefore it doesn’t do a lot of caching and optimization.

But hey, this is a raymarcher in ~350 lines of code, with some cool tricks like destructuring!
I’m fine with the results.
A slightly more polished version of the code from this article is available at this repository, so if anything doesn’t work in the code above, or you got lost and just want to play with the final result, you know where to go 🙂

Till next time, and thanks for reading!

Continue Reading


Ocwen Financial Provides Business Update and Preliminary Third Quarter Results

becker blake




Continued profitability improvement and originations volume growth

Strong operating and financial momentum

Settlement with Florida completes resolution of all state actions from 2017

WEST PALM BEACH, Fla., Oct. 20, 2020 (GLOBE NEWSWIRE) — Ocwen Financial Corporation (NYSE: OCN) (“Ocwen” or the “Company”), a leading non-bank mortgage servicer and originator, today provided preliminary information regarding its third quarter 2020 results and progress on the Company’s key business priorities. A presentation with additional detail regarding today’s announcement is available on the Ocwen Financial Corporation website at (through a link on the Shareholder Relations page).

The Company reported a net loss of $9.4 million and a pre-tax loss of $11.4 million for the three months ended September 30, 2020, compared to a net loss of $42.8 million and a pre-tax loss of $38.3 million for the three months ended September 30, 2019. Adjusted pre-tax income was $13.5 million for the quarter compared to a $42.0 million adjusted pre-tax loss excluding NRZ lump-sum amortization in the prior year period (see “Note Regarding Non-GAAP Financial Measures” below).

Glen A. Messina, President and CEO of Ocwen, said, “Our performance across the business is progressing consistent with our expectations. The execution of our strategy to drive balance, diversification, cost leadership and operational excellence is delivering improved profitability, originations growth across all channels, and continued strong operating performance in our servicing business. Our total liquidity position has improved from last quarter and we are making good progress on our plans to implement an MSR asset vehicle to support our continued growth and diversification efforts.”

Mr. Messina continued, “I believe the Ocwen of today is stronger, more efficient, more diversified, and well positioned to capitalize on current and emerging growth opportunities. I am very proud of our global team for their continued commitment to our mission of creating positive outcomes for homeowners, communities and investors.”

The Company reported the following preliminary results for the third quarter 2020 (see “Note Regarding Non-GAAP Financial Measures” and “Note Regarding Financial Performance Estimates” below):

  • Pre-tax loss was $11.4 million compared to pre-tax loss of $38.3 million for the third quarter 2019. Adjusted pre-tax income was $13.5 million; fourth consecutive quarter of positive adjusted pre-tax income.
  • Annualized pre-tax loss improved by $208 million compared to the combined annualized pre-tax loss of Ocwen and PHH Corporation for the second quarter 2018; annualized adjusted pre-tax earnings run rate excluding amortization of NRZ lump-sum payments improved by more than $376 million compared to the combined annualized adjusted pre-tax earnings run rate of Ocwen and PHH Corporation for the second quarter 2018.
  • Notable items for the quarter include, among others, $13.8 million of re-engineering and COVID-19 related expenses, $5.8 million for legal and regulatory reserves and $4.4 million of MSR valuation adjustments.
  • Resolved legacy regulatory matter with the State of Florida Office of the Attorney General and Office of Financial Regulation on October 15, 2020. The Company has now resolved all state actions from 2017.
  • Approximately $6.7 billion of servicing UPB originated through forward and reverse lending channels, up 67% from prior quarter; average daily lock volume of approximately $145 million in October to date.
  • Added approximately $4.7 billion of interim subservicing UPB from existing subservicing clients and $15 billion of opportunities in late-stage discussions. Strong pipeline with top 10 prospects representing approximately $125 billion in combined subservicing, flow and recapture services opportunities.
  • Approximately $413 million of unrestricted cash and available credit at September 30, 2020, up from $314 million at June 30, 2020; previously identified balance sheet optimization actions on track.
  • Continued progress on the implementation of MSR asset vehicle (“MAV”) and the Company is in advanced discussions with potential investors. MAV is expected to provide funding for up to $55 billion in synthetic subservicing and enable portfolio retention services.
  • Approximately 75,000 forbearance plans outstanding as of October 9, 2020, down from a peak of approximately 131,000 forbearance plans outstanding at the end of the second quarter. Servicer advance levels are approximately 27% below base case servicer advance levels as of September 30, 2020.

Webcast and Conference Call

Ocwen will hold a conference call on Tuesday, October 20, 2020 at 8:30 a.m. (ET) to review the Company’s preliminary third quarter 2020 operating results. A live audio webcast and slide presentation for the call will be available through a link on the Shareholder Relations page of the Company’s website. A replay of the conference call will be available via the website approximately two hours after the conclusion of the call and will remain available for approximately 30 days. The Company expects to release final third quarter 2020 results in early November.

About Ocwen Financial Corporation

Ocwen Financial Corporation (NYSE: OCN) is a leading non-bank mortgage servicer and originator providing solutions through its primary brands, PHH Mortgage and Liberty Reverse Mortgage. PHH Mortgage is one of the largest servicers in the country, focused on delivering a variety of servicing and lending programs. Liberty is one of the nation’s largest reverse mortgage lenders dedicated to education and providing loans that help customers meet their personal and financial needs. We are headquartered in West Palm Beach, Florida, with offices in the United States and the U.S. Virgin Islands and operations in India and the Philippines, and have been serving our customers since 1988. For additional information, please visit our website.

Forward-Looking Statements

This press release contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These forward-looking statements may be identified by a reference to a future period or by the use of forward-looking terminology. Forward-looking statements are typically identified by words such as “expect”, “believe”, “foresee”, “anticipate”, “intend”, “estimate”, “goal”, “strategy”, “plan”, “target” and “project” or conditional verbs such as “will”, “may”, “should”, “could” or “would” or the negative of these terms, although not all forward-looking statements contain these words. Forward-looking statements by their nature address matters that are, to different degrees, uncertain. Our business has been undergoing substantial change and we are in the midst of a period of significant capital markets volatility and experiencing significant changes within the mortgage lending and servicing ecosystem which have magnified such uncertainties. Readers should bear these factors in mind when considering such statements and should not place undue reliance on such statements.

Forward-looking statements involve a number of assumptions, risks and uncertainties that could cause actual results to differ materially. In the past, actual results have differed from those suggested by forward looking statements and this may happen again. Important factors that could cause actual results to differ materially from those suggested by the forward-looking statements include, but are not limited to, uncertainty relating to the continuing impacts of the COVID-19 pandemic, including the response of the U.S. government, state governments, the Federal National Mortgage Association (Fannie Mae) and Federal Home Loan Mortgage Corporation (Freddie Mac) (together, the GSEs), the Government National Mortgage Association (Ginnie Mae) and regulators, as well as the potential for ongoing disruption in the financial markets and in commercial activity generally, increased unemployment, and other financial difficulties facing our borrowers; the proportion of borrowers who enter into forbearance plans, the financial ability of borrowers to resume repayment and their timing for doing so; the adequacy of our financial resources, including our sources of liquidity and ability to sell, fund and recover servicing advances, forward and reverse whole loans, and HECM and forward loan buyouts and put backs, as well as repay, renew and extend borrowings, borrow additional amounts as and when required, meet our MSR or other asset investment objectives and comply with our debt agreements, including the financial and other covenants contained in them; increased servicing costs based on increased borrower delinquency levels or other factors; our ability to consummate a transaction with investors to implement our planned mortgage asset vehicle, the timeline for making such a vehicle operational, including obtaining required regulatory approvals, and the extent to which such a vehicle will accomplish our objectives; the future of our long-term relationship and remaining servicing 
agreements with NRZ; our ability to timely adjust our cost structure and operations following the completion of the loan transfer process in response to the previously disclosed termination by NRZ of the PMC subservicing agreement; our ability to continue to improve our financial performance through cost re-engineering efforts and other actions; our ability to continue to grow our lending business and increase our lending volumes in a competitive market and uncertain interest rate environment; our ability to execute on identified business development and sales opportunities; uncertainty related to past, present or future claims, litigation, cease and desist orders and investigations regarding our servicing, foreclosure, modification, origination and other practices brought by government agencies and private parties, including state regulators, the Consumer Financial Protection Bureau (CFPB), State Attorneys General, the Securities and Exchange Commission (SEC), the Department of Justice or the Department of Housing and Urban Development (HUD); adverse effects on our business as a result of regulatory investigations, litigation, cease and desist orders or settlements and the reactions of key counterparties, including lenders, the GSEs and Ginnie Mae; our ability to comply with the terms of our settlements with regulatory agencies and the costs of doing so; increased regulatory scrutiny and media attention; any adverse developments in existing legal proceedings or the initiation of new legal proceedings; our ability to effectively manage our regulatory and contractual compliance obligations; our ability to interpret correctly and comply with liquidity, net worth and other financial and other requirements of regulators, the GSEs and Ginnie Mae, as well as those set forth in our debt and other agreements; our ability to comply with our servicing agreements, including our ability to comply with the requirements of the GSEs and Ginnie Mae and maintain our seller/servicer 
and other statuses with them; our ability to fund future draws on existing loans in our reverse mortgage portfolio; our servicer and credit ratings as well as other actions from various rating agencies, including any future downgrades; as well as other risks and uncertainties detailed in Ocwen’s reports and filings with the SEC, including its annual report on Form 10-K for the year ended December 31, 2019 and its current and quarterly reports since such date. Anyone wishing to understand Ocwen’s business should review its SEC filings. Our forward-looking statements speak only as of the date they are made, and we disclaim any obligation to update or revise forward-looking statements whether as a result of new information, future events or otherwise.

Note Regarding Non-GAAP Financial Measures

This press release contains references to non-GAAP financial measures, such as our references to adjusted pre-tax income (loss) and adjusted pre-tax income (loss) excluding amortization of NRZ lump-sum payments.

We believe these non-GAAP financial measures provide a useful supplement to discussions and analysis of our financial condition. In addition, management believes that these presentations may assist investors with understanding and evaluating our cost re-engineering efforts and other initiatives to drive improved financial performance. However, these measures should not be analyzed in isolation or as a substitute for analysis of our GAAP expenses and pre-tax income (loss). There are certain limitations to the analytical usefulness of the adjustments we make to GAAP expenses and pre-tax income (loss) and, accordingly, we rely primarily on our GAAP results and use these adjustments only for purposes of supplemental analysis. Non-GAAP financial measures should be viewed in addition to, and not as an alternative for, Ocwen’s reported results under accounting principles generally accepted in the United States. Other companies may use non-GAAP financial measures with the same or similar titles that are calculated differently from our non-GAAP financial measures. As a result, comparability may be limited. Readers are cautioned not to place undue reliance on analysis of the adjustments we make to GAAP expenses and pre-tax income (loss).

Beginning with the three months ended June 30, 2020, we refined our definitions of Expense Notables, which we previously referred to as “Expenses Excluding MSR Valuation Adjustments, net, and Expense Notables,” and Income Statement Notables in order to be more descriptive of the types of items included.

Expense Notables

In the table titled “Expense Notables”, we adjust GAAP operating expenses for the following factors: (1) expenses related to severance, retention and other actions associated with continuous cost and productivity improvement efforts, (2) significant legal and regulatory settlement expense itemsᵃ, (3) NRZ consent process expenses related to the transfer of legal title in MSRs to NRZ, (4) PHH acquisition and integration planning expenses, and (5) certain other significant activities including, but not limited to, insurance related expense and settlement recoveries, compensation or incentive compensation expense reversals and non-routine transactions (collectively, Other), consistent with the intent of providing management and investors with a supplemental means of evaluating our expenses.

($ in millions) Q2’18 Q3’19 Q3’20(c)
OCN PHH OCN + PHH OCN + PHH (Annualized) OCN OCN (Annualized) OCN OCN (Annualized)
I Expenses (as reported) (a) 206 71 277 1,107 45 179
II Reclassifications (b) 1 1 5
III Deduction of MSR valuation adjustments, net (33 ) (33 ) (132 ) 135 538
IV Operating Expenses (I+II+III) 173 72 245 979 179 717 150 598
Adjustments for Notables
Re-engineering costs (5 ) (3 ) (8 ) (32 ) (18 ) (7 )
Significant legal and regulatory settlement expenses (7 ) (3 ) (11 ) (42 ) (4 ) (6 )
NRZ consent process expenses (1 ) (1 ) (2 ) (0 ) 0
PHH acquisition and integration planning expenses (2 ) (2 ) (8 )
Expense recoveries 6 6 23 2
COVID-19 Related Expenses (6 )
Other 1 (1 ) (1 ) 3 (0 )
V Expense Notables (9 ) (7 ) (16 ) (63 ) (17 ) (19 )
VI Adjusted Expenses (IV+V) 164 65 229 916 162 648 130 522

(a) Q2’18 expenses as per OCN Form 10-Q of $206 filed on July 26, 2018 and PHH Form 10-Q of $71 filed August 3, 2018, annualized to equal $1,107 on a combined basis

(b) Reclassifications made to PHH reported expenses to conform to Ocwen presentation

(c) OCN changed the presentation of expenses in Q4’19 to separately report MSR valuation adjustments, net, from operating expenses

Income Statement Notables

In the table titled “Income Statement Notables”, we adjust GAAP pre-tax loss for the following factors: (1) Expense Notables, (2) changes in fair value of our Agency and Non-Agency MSRs due to changes in interest rates, valuation inputs and other assumptions, net of hedge positions, (3) offsets to changes in fair value of our MSRs in our NRZ financing liability due to changes in interest rates, valuation inputs and other assumptions, (4) changes in fair value of our reverse originations portfolio due to changes in interest rates, valuation inputs and other assumptions, (5) certain other transactions, including but not limited to pension benefit cost adjustments and gains related to exercising servicer call rights and fair value assumption changes on other investments (collectively, Other) and (6) amortization of NRZ lump-sum cash payments, consistent with the intent of providing management and investors with a supplemental means of evaluating our net income/(loss).

($ in millions) Q2’18 Q3’19 Q3’20
OCN PHH OCN + PHH OCN + PHH (Annualized) OCN OCN (Annualized) OCN OCN (Annualized)
I Reported Pre-Tax Income / (Loss)(a) (28 ) (35 ) (63 ) (253 ) (38 ) (153 ) (11 ) (25 )
Adjustment for Notables
Expense Notables (from prior table) 9 7 16 17 19
Non-Agency MSR FV Change(b) (5 ) (5 ) (252 ) (14 )
Agency MSR FV Change, net of macro hedge(b) 63 4
NRZ MSR Liability FV Change (Interest Expense) 9 9 198 10
Reverse FV Change 4 4 (3 ) 4
Debt Repurchase Gain (5 )
Other (6 ) (6 ) 2 1
II Total Income Statement Notables 11 7 18 72 21 83 25
III Adjusted Pre-tax Income (Loss) (I+II) (17 ) (28 ) (45 ) (181 ) (18 ) (70 ) 14 54
IV Amortization of NRZ Lump-sum Cash Payments (35 ) (35 ) (141 ) (42 ) (98 )
V Adjusted Pre-tax Income (Loss) excluding Amortization of NRZ Lump-sum (III+IV)(c) (53 ) (28 ) (81 ) (322 ) (42 ) (168 ) 14 54

(a) Q2’18 pre-tax loss as per respective Forms 10-Q filed on July 26, 2018 and August 3, 2018, respectively, annualized to equal $(253) million on a combined basis

(b) Represents FV changes that are driven by changes in interest rates, valuation inputs or other assumptions, net of unrealized gains / (losses) on macro hedge. Non-Agency = Total MSR excluding GNMA & GSE MSRs. Agency = GNMA & GSE MSRs. The adjustment does not include $12 million valuation gains of certain MSRs that were opportunistically purchased in disorderly transactions due to the market environment in Q2 2020 (nil in Q2 2018).

(c) Represents OCN and PHH combined adjusted pre-tax income (loss) excluding amortization of NRZ lump-sum cash payments, annualized to equal $(322) million on a combined basis in Q2’18

Note Regarding Financial Performance Estimates

This press release contains statements relating to our preliminary third quarter financial performance and our current assessments of the impact of the COVID-19 pandemic. These statements are based on currently available information and reflect our current estimates and assessments, including about matters that are beyond our control. We are operating in a fluid and evolving environment and actual outcomes may differ materially from our current estimates and assessments. The Company has not finished its third quarter financial closing procedures. There can be no assurance that actual results will not differ from our current estimates and assessments, including as a result of third quarter financial closing procedures, and any such differences could be material.


ᵃ Including, but not limited to, CFPB, Florida Attorney General/Florida Office of Financial Regulation and Massachusetts Attorney General litigation related legal expenses, state regulatory action related legal expenses and state regulatory action settlement related escrow analysis costs (collectively, CFPB and state regulatory defense and escrow analysis expenses)



Power Rankings: Chiefs Back to No. 1; Steelers Top Undefeated

Emily Walpole



It’s Week 7! Where does the time go? Here in the year 2020, I honestly couldn’t tell you.

But it’s a fun time of year for the power rankings (aren’t they all?) as some teams start separating themselves from the pack, others sink to the bottom and others swing wildly from good to bad and back again on a week-to-week basis.

Everyone treats power rankings differently. Am I supposed to just sort the teams by who would win on a neutral field right now? Do I go with who is having the best overall season? Should I factor in a team’s long-term outlook or preseason expectations if that impacts how they might feel about their season? Do I add weight to the most recent games, since that often informs the general mood and vibe of each team? The answer: Probably a little of everything, in moderation.

But one thing I try hard to do is avoid overreacting to each individual game. What makes this time of year fun is that nearly every week we see a major upset, an unexpected blowout or some other outcome that we just couldn’t see coming. One of the strangest things about the NFL is that the relatively small sample size of a 16-game season still leaves plenty of room for variation. Sometimes a very good team has a very bad game. Sometimes vice versa. Oftentimes things go back to normal seven days later.

So let’s all take a deep breath. We have seen six weeks. We have a big enough body of work on most of these teams not to overreact every single time we do this. Off we go…


Mark Konezny-USA TODAY Sports

1. Kansas City Chiefs (5–1)

Last week: Win at Buffalo 26–17
Next week: at Denver

I tried to talk myself into one of the undefeated teams for the No. 1 spot, but I couldn’t quite bring myself to do it. A week ago I would not have had the Chiefs in the top spot, because it’s harder to do that after a loss. But I’ve felt all along that the Chiefs are the best team, and every team is going to slip up at some point like they did against the Raiders. It was good to see a rebound on Monday night with a dominant game against a good Bills team on a rainy night. I would pick the Chiefs to beat any other team on a neutral field right now and I still expect them to repeat as Super Bowl champs, so they’re back to being my No. 1.

2. Pittsburgh Steelers (5–0)

Last week: Win vs. Browns 38–7
Next week: at Tennessee

There are arguments for and against all three remaining undefeated teams, but I will give the edge to the Steelers by a hair. Their defense is not just the best in the league, but really one of only three or four good ones. Back-to-back road games at Tennessee and Baltimore will give us a much better idea of just how good, and we can always update the power rankings then. For now, I will let Steelers fans bask in their team’s smothering defense, Chase Claypool’s breakout and the No. 2 spot in my rankings.

3. Seattle Seahawks (5–0)

Last week: Bye
Next week: at Arizona

The Seahawks are coming off a bye, and it is tempting to give an edge to the teams we saw look good this weekend, but let’s not forget how great Russell Wilson and this offense have been in 2020. You are allowed to have some concerns about the defense and that some of the games have been a little closer than you might want, but it hasn’t caught up to them yet. So far the Seahawks have been able to outscore everyone, and they’ll remain a top-three team until that is proven otherwise.

4. Tennessee Titans (5–0)

Last week: Win vs. Houston 42–36 OT
Next week: vs. Pittsburgh

If you just watch the highlights, you could come to the conclusion that the Titans are an absolutely unstoppable force. Ryan Tannehill is playing great, Derrick Henry has stiff-armed three people and trampled four others while you were reading this sentence and Mike Vrabel has internalized his own streak of Belichickian gamesmanship, as we learned from his intentional 12 men on the field penalty that came to light Monday. Of course, they did take advantage of a series of breaks Sunday to get that big win over Houston, including Romeo Crennel’s decision to go for two, a review on A.J. Brown’s game-tying TD giving him the score by an inch and the coin toss to open overtime. So I’m slotting the Titans in fourth here, behind the Chiefs and the other two undefeated teams. But I hope nobody takes this as a slight. The Titans are very good, getting better and will get their shot to move up when they play Pittsburgh next week.

5. Baltimore Ravens (5–1)

Last week: Win at Philadelphia 30–28
Next week: Bye

The Ravens have been quietly taking care of business, doing most of what has been asked of them. Their offense has not quite clicked like last year, but they lead the league in point differential (and total points, though of course not every team has played six games), even after letting the Eagles climb back into things on Sunday. And of all the one-loss teams, their defeat is the easiest to excuse away. Their schedule is about to get more challenging, which should give us some fun games to watch. The Ravens have entered the realm where most fans will only care about what they do in the playoffs, but it’ll still be telling to see how they compete in potential previews against the teams they’ll meet when they get there.

6. Tampa Bay Buccaneers (4–2)

Last week: Win vs. Green Bay 38–10
Next week: at Las Vegas

A shout out to MMQB senior editor Gary Gramling, who had the Bucs third in the power rankings last week, even before Tampa Bay dismantled the Packers. Third! Last week! I laughed at him about it over Zoom. Thankfully I haven’t seen him in person in eight months, and he will likely forget about this by the time I see him again in, hopefully, 2021. I still think he’s a little too high on them, though I admire his willingness to make a statement. (Side note: It’s a little funny to think about how many people out there don’t realize The MMQB is rotating power rankings authors this year and will just look at websites that aggregate their favorite team’s rankings across the internet and see that Sports Illustrated has dropped the Bucs from No. 3 to No. 6.) Anyway, of course Tampa Bay looked like a top-five team on Sunday. But I guess you can call me stubborn that I’m not quite ready to leapfrog them over the Chiefs, the Ravens or the undefeated squads.

7. Green Bay Packers (4–1)

Last week: Loss at Tampa Bay 38–10
Next week: at Houston

The Packers are a good football team. Yes, they got waxed for the final three quarters against Tampa Bay, but I don’t want to overreact too much to one game. Look at the 49ers, who got crushed by the Dolphins in Week 5 and then came back to beat a strong Rams team. So of course the Packers slide down the board a bit, behind the undefeated teams and those Buccaneers, but you should never say the sky is falling after just one bad game—no matter how bad it feels the day after.

8. Buffalo Bills (4–2)

Last week: Loss vs. Kansas City 26–17
Next week: at New York Jets

That’s two straight losses for the Bills now, but I wouldn’t say it’s knocked their season off course. They are the best team in their division, but seem to be a cut below the top-tier teams in the conference. They are dangerous enough to beat anyone on any given day, but come January they’ll likely have to do it on the road.

9. Los Angeles Rams (4–2)

Last week: Loss at San Francisco 24–16
Next week: vs. Chicago (Monday)

The Rams appeared to be cruising until they went into Santa Clara Sunday night, even if an NFC East sweep doesn’t mean much this year. I’m still bullish on them in the NFC playoff race, and I don’t want to overreact to one division loss on the road. (As mentioned in the intro: Trying not to overreact to one game is a common theme in this column!) But it sure seems like the Rams will need to go .500 in their division games, given that everyone else is fighting for those playoff spots too. And they’ve gotten off on the wrong foot there.

10. Arizona Cardinals (4–2)

Last week: Win at Dallas 38–10
Next week: vs. Seattle

We know the Cardinals have firepower, which they showed on Monday night against a depleted Cowboys team. At their best, they are certainly good enough to compete in the NFC’s second tier. But they are also a young team that could face some growing pains, as we saw in the back-to-back losses to the Lions and Panthers. Still, I’m definitely buying the future of the Kliff/Kyler Cardinals and excited to see them engage the Seahawks in a shootout next week.

11. New Orleans Saints (3–2)

Last week: Bye
Next week: vs. Carolina

The Drew Brees discourse (and its close relative, the Taysom Hill discourse) would probably be a dominant story line of this season anyway, but it feels like it’s been even more under the microscope because the Saints played three of their first five games in prime time, plus the Week 1 game against the Bucs was the top game in the late Sunday window. Now they return from a bye week to a cozy spot in the 1:00 ET window. I think many of us expected to see New Orleans dominate the regular season or at the very least sleepwalk their way into the playoffs, but watching them have to scratch and claw in what feels like the end of the Brees era is a much more entertaining plot, if that’s what they need to do as the season rolls on.

12. Chicago Bears (5–1)

Last week: Win at Carolina 23–16
Next week: at Los Angeles Rams (Monday)

The Bears are 5–1! They are doing it on the strength of their defense, which is a recipe not many teams are following here in 2020. I’m not sure if they can sustain this for the whole season, especially as the schedule tightens up, but they have banked enough wins already to be in the playoff chase for the long haul.

13. Indianapolis Colts (4–2)

Last week: Win vs. Cincinnati 31–27
Next week: Bye

Indy won in a very un-2020-Colts-like fashion Sunday, with the previously vaunted defense giving up a quick 24 points and Philip Rivers leading them back. I haven’t been as high on them as many others have been this year, and was ready to drop them into the bottom half of these rankings after five quarters against the two Ohio teams. But it was encouraging to see they can win a game when Rivers was forced to throw it 44 times. Still, I think this team needs the defense to carry it, and that can be tough in a league where so many teams are capable of scoring so many points so quickly.

14. Las Vegas Raiders (3–2)

Last week: Bye
Next week: vs. Tampa Bay

The Raiders probably would’ve liked to get right back on the field, given that we last saw them taking down the mighty Chiefs. But they’ll return in prime time against the Bucs, with a chance to prove they belong among the AFC’s top teams.

15. San Francisco 49ers (3–3)

Last week: Loss vs. Los Angeles Rams 24–16
Next week: at New England

The 49ers’ last couple games have given us a perfect example of why you can’t overreact to each game. You might have thought the season was spiraling two Sundays ago when they received a beatdown from the Dolphins. They came right back with a smart game plan to get the ball out of Jimmy Garoppolo’s hands quickly and let his receivers do the rest. For a team that is ravaged with injuries and facing as brutal a schedule stretch as any team faces this year, that was a massive win against the division rival Rams.

16. Cleveland Browns (4–2)

Last week: Loss at Pittsburgh 38–7
Next week: at Cincinnati

Cleveland took a huge, huge step backward, getting blown out by the Steelers, but any Browns fan reading this either A) Already knew that; and/or B) Saw Sunday coming a mile away. The Browns are firmly part of the NFL’s vast group of middle teams that’s good enough to beat a good team on any given week but prone to lay an occasional egg. Of course the injuries to some of the team’s key players played a part Sunday, but plenty of other teams are dealing with lengthy injury reports too.

The Weak-Side Podcast now has its own feed! Subscribe to listen to Conor Orr and Jenny Vrentas every week.

17. New England Patriots (2–3)

Last week: Loss vs. Denver 18–12
Next week: vs. San Francisco

Several teams have had their seasons derailed by COVID-19, but few have been impacted more than the Patriots, who had their schedule moved around, lost several key players, lost significant practice time and lost their starting QB for some time. Cam Newton returned to the lineup Sunday, but it felt like a long time had passed since the opening weeks of the season when he made even non-Patriots fans seem to soften and find joy in an exciting new chapter in Foxboro. Not many people want to have sympathy for Bill Belichick or Patriots fans this year (and he wasn’t necessarily asking for it), but the coach had a fair point when he spoke about a lack of practice time after the loss to Denver. I expect them to get back on track.

18. Carolina Panthers (3–3)

Last week: Loss vs. Chicago 23–16
Next week: at New Orleans

The Panthers have been one of the feel-good stories of the 2020 season. The three-game winning streak (without Christian McCaffrey) was bound to end at some point, so I won’t get bent out of shape about a loss to the Bears. The Panthers are playing with house money, and how could anyone not be happy with what they’ve seen from Matt Rhule’s first season in Carolina?

19. Miami Dolphins (3–3)

Last week: Win vs. New York Jets 24–0
Next week: Bye

We don’t learn too much about the teams that get their weekly chance to pummel the Jets, but it still must be fun for the teams that get the privilege. Still, Miami has now won three of four and it’s been impressive to see Brian Flores have his team ahead of schedule in each of his seasons in South Florida. Tua Tagovailoa finally saw game action on Sunday, and the biggest question of the season remains when the Dolphins will make the switch. Would a run at a playoff spot delay the transition from Ryan Fitzpatrick? With some winnable games on the schedule, we may find out.

20. Los Angeles Chargers (1–4)

Last week: Bye
Next week: vs. Jacksonville

New quarterback, same Chargers. The Bolts have pushed some of the league’s best teams to the brink, but found a way to lose several close games in Chargers fashion. It’s hard to judge the early part of the Chargers’ season. Lots of teams with young quarterbacks would be happy with moral victories. And in the long run, the Chargers may benefit from Justin Herbert being pressed into duty earlier than expected. But they are also a team that had designs on a playoff push this year, so the 1–4 record is still a bummer. They are not an easy out, by any means.

21. Denver Broncos (2–3)

Last week: Win at New England 18–12
Next week: vs. Kansas City

The Broncos’ defense deserves plenty of credit for showing up in Foxboro this weekend, but it’s awfully tough to win without turning field goal drives into touchdowns. Still, Denver somewhat quietly has a decent resume, with close losses to Tennessee and Pittsburgh aging well and now a couple wins. Getting Drew Lock back in the lineup is a huge boost, but his stat line (10-for-24 for 189 yards and two interceptions) is not what they wanted to see. This is another team battered by injuries that I just have a hard time seeing in the playoff hunt when the dust settles.

22. Houston Texans (1–5)

Last week: Loss at Tennessee 42–36 OT
Next week: vs. Green Bay

Sunday could have been a statement win for the Texans, and I am very much fine with Romeo Crennel’s decision to go for two to win the game. His team employs noted two-point conversion converter Deshaun Watson, and a decent percentage of the people who are ripping him for the decision would be praising him if the move had worked out. It would have been a great win for a team that could have then waved away the bad start as a result of a brutal opening schedule and an unpopular coach. But crawling out of a 1–5 hole is likely too tall a task. Alas.

23. Philadelphia Eagles (1-4-1)

Last week: Loss vs. Baltimore 30–28
Next week: vs. New York Giants (Thursday)

The Eagles are one of the hardest teams in the league to rank. Carson Wentz does appear to be getting back on track, even if he showed it in losses the last two weeks to the heavily favored Steelers and Ravens. And of course, getting back on track is not the same as playing as well as his most vocal supporters expected this season. The team also suffered even more injuries, and now heads into a short-week Thursday night game. Seven wins might be enough to take the division (could six? could five?), but the Eagles have to actually start winning those games now that the schedule gets easier, and they haven’t given fans any reason to think they’ll start playing consistently good football.

24. Detroit Lions (2–3)

Last week: Win at Jacksonville 34–16
Next week: at Atlanta

It was nice to see D’Andre Swift have his first big game as a pro during a get-right win against the Jaguars. The Lions are good enough to escape the league’s bottom tier, but I have a hard time believing they’re going to win enough games to keep the Matt Patricia era going into 2021.

25. Dallas Cowboys (2–4)

Last week: Loss vs. Arizona 38–10
Next week: at Washington

If Andy Dalton’s game-winning drive at the end of the Giants’ game raised your expectations for what he can do the rest of the season, Monday night was probably a rude awakening. We also learned a little more (not that we necessarily had to) about how good Dak Prescott really was before his injury. The Cowboys have had injuries at other positions too, which is a big reason their outlook is so dire.

26. Minnesota Vikings (1–5)

Last week: Loss vs. Atlanta 40–23
Next week: Bye

The Vikings have mercifully arrived at their bye week at an ugly 1–5, a mark that includes three games they weren’t really in and two soul-crushing one-point losses. Which type of loss would you rather your team suffer? Well, the answer certainly isn’t, “Both!” They need to figure things out immediately and save their season in a stretch that features all three NFC North rivals in a row. But it’s hard to have any confidence that one of the league’s most disappointing teams of 2020 can actually pull that off.

27. Cincinnati Bengals (1-4-1)

Last week: Loss at Indianapolis 31–27
Next week: vs. Cleveland

In many ways, the Bengals are playing with house money this year. The wins and losses don’t really matter, and they can consider it a successful season if Joe Burrow and some of the other young players on the roster develop. But wins are still nice! And it’s a major disappointment to blow a 24–0 lead against the Colts in what should have been a bounce-back game after having the doors blown off against the Ravens. But young teams must learn lessons like this on the journey up from the bottom.

28. Atlanta Falcons (1–5)

Last week: Win at Minnesota 40–23
Next week: vs. Detroit

The Falcons finally won a game, and they certainly needed it. They couldn’t lose ‘em all, and in fact will probably win a few more. I don’t think they’ll finish the season this far down the list, but this is what happens when you start the year 0–5 and fire your head coach and general manager.

29. New York Giants (1–5)

Last week: Win vs. Washington 20–19
Next week: at Philadelphia (Thursday)

The Giants finally got off the schneid against Washington, a win they desperately needed. I wouldn’t say their record is deceptive, but they have been competitive in all but one of their games, all of which were against playoff contenders until this Sunday. This middle part of their schedule is much easier than the opening month, and if they can split with the Eagles and beat Washington again, they could be 3–7 and somehow in the NFC East chase on Thanksgiving.

30. Jacksonville Jaguars (1–5)

Last week: Loss vs. Detroit 34–16
Next week: at Los Angeles Chargers

The last time I was in charge of the power rankings was after Week 1, with the Jags fresh off their upset win over the Colts (after I suggested that I actually liked the Jacksonville moneyline on the MMQB Gambling Podcast). The Jags had been a near-unanimous No. 32 in power rankings across the internet, so when I moved them to 28th, I wrote, “Imagine not rooting for the 2020 Jaguars. Couldn’t be me.” I foolishly thought Jags fans might be happy to see their team moving up after a feel-good opening weekend. Instead, I was ridiculed on the Jaguars subreddit as “absolute hot trash” for ranking them below a bunch of teams that had lost in Week 1. The Jaguars have not won since. Thirtieth feels about right for now.

31. Washington Football Team (1–5)

Last week: Loss at New York Giants 20–19
Next week: vs. Dallas

Washington’s Week 1 shellacking of the Eagles’ offensive line looks to have given Ron Rivera a false sense that the team was ready to win some games. Since then, the team has been uncompetitive against good opponents and then came up just short against a winless Giants team. It seems silly not to spend the rest of this season giving Dwayne Haskins a chance. Who cares if you think Kyle Allen can get you to 5–11?

32. New York Jets (0–6)

Last week: Loss at Miami 24–0
Next week: vs. Buffalo

The Jets are a total embarrassment, on the field and off.

