The First Few Milliseconds of an HTTPS Connection

Mish Boyka

Convinced from spending hours reading rave reviews, Bob eagerly clicked “Proceed to Checkout” for his gallon of Tuscan Whole Milk and…

Whoa! What just happened?

In the 220 milliseconds that flew by, a lot of interesting stuff happened to make Firefox change the address bar color and put a lock in the lower right corner. With the help of Wireshark, my favorite network tool, and a slightly modified debug build of Firefox, we can see exactly what’s going on.

By agreement of RFC 2818, Firefox knew that “https” meant it should connect to port 443 at Amazon.com:

Most people associate HTTPS with SSL (Secure Sockets Layer) which was created by Netscape in the mid 90’s. This is becoming less true over time. As Netscape lost market share, SSL’s maintenance moved to the Internet Engineering Task Force (IETF). The first post-Netscape version was re-branded as Transport Layer Security (TLS) 1.0 which was released in January 1999. It’s rare to see true “SSL” traffic given that TLS has been around for 10 years.

Client Hello

TLS wraps all traffic in “records” of different types. We see that the first byte out of our browser is the hex byte 0x16 = 22 which means that this is a “handshake” record:

The next two bytes are 0x0301, indicating a version 3.1 record and showing that TLS 1.0 is essentially SSL 3.1.

The handshake record is broken out into several messages. The first is our “Client Hello” message (0x01). There are a few important things here:

  • Random:

    There are four bytes representing the current Coordinated Universal Time (UTC) in the Unix epoch format, which is the number of seconds since January 1, 1970. In this case, 0x4a2f07ca. It’s followed by 28 random bytes. This will be used later on.
  • Session ID:

    Here it’s empty/null. If we had previously connected to Amazon.com a few seconds ago, we could potentially resume a session and avoid a full handshake.
  • Cipher Suites:

    This is a list of all of the encryption algorithms that the browser is willing to support. Its top pick is a very strong choice of “TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA” followed by 33 others that it’s willing to accept. Don’t worry if none of that makes sense. We’ll find out later that Amazon doesn’t pick our first choice anyway.
  • server_name extension:

    This is a way to tell Amazon.com that our browser is trying to reach https://www.amazon.com/. This is really convenient because our TLS handshake occurs long before any HTTP traffic. HTTP has a “Host” header which allows cost-cutting Internet hosting companies to pile hundreds of websites onto a single IP address. SSL has traditionally required a different IP for each site, but this extension allows the server to respond with the appropriate certificate that the browser is looking for. If nothing else, this extension should allow an extra week or so of IPv4 addresses.
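The 32 byte “Random” field described above can be sketched in a few lines. This is a minimal illustration (the function name is my own); the layout of a 4 byte big-endian Unix timestamp followed by 28 random bytes comes straight from the spec:

```python
import os
import struct
import time

def client_hello_random() -> bytes:
    """Build a ClientHello-style Random field: 4-byte gmt_unix_time
    plus 28 cryptographically random bytes."""
    gmt_unix_time = struct.pack(">I", int(time.time()))  # big-endian seconds since 1970
    return gmt_unix_time + os.urandom(28)
```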

Server Hello

Amazon.com replies with a handshake record that’s a massive two packets in size (2,551 bytes). The record has version bytes of 0x0301 meaning that Amazon agreed to our request to use TLS 1.0. This record has three sub-messages with some interesting data:

  1. “Server Hello” Message (2):

    • We get the server’s four byte Unix epoch time representation and its 28 random bytes that will be used later.
    • A 32 byte session ID in case we want to reconnect without a big handshake.
    • Of the 34 cipher suites we offered, Amazon picked “TLS_RSA_WITH_RC4_128_MD5” (0x0004). This means that it will use the “RSA” public key algorithm to verify certificate signatures and exchange keys, the RC4 encryption algorithm to encrypt data, and the MD5 hash function to verify the contents of messages. We’ll cover these in depth later on. I personally think Amazon had selfish reasons for choosing this cipher suite. Of the ones on the list, it was the one that was least CPU intensive to use so that Amazon could crowd more connections onto each of their servers. A much less likely possibility is that they wanted to pay special tribute to Ron Rivest, who created all three of these algorithms.
  2. Certificate Message (11):

    • This message takes a whopping 2,464 bytes and is the certificate that the client can use to validate Amazon’s identity. It isn’t anything fancy. You can view most of its contents in your browser:
  3. “Server Hello Done” Message (14):

    • This is a zero byte message that tells the client that it’s done with the “Hello” process and indicates that the server won’t be asking the client for a certificate.

Checking out the Certificate

The browser has to figure out if it should trust Amazon.com. In this case, it’s using certificates. It looks at Amazon’s certificate and sees that the current time is after the “not before” time of August 26, 2008 and before the “not after” time of August 27, 2009. It also checks to make sure that the certificate’s public key is authorized for exchanging secret keys.

Why should we trust this certificate?

Attached to the certificate is a “signature” that is just a really long number in big-endian format:

Anyone could have sent us these bytes. Why should we trust this signature? To answer that question, we need to make a speedy detour into mathemagic land:

Interlude: A Short, Not Too Scary, Guide to RSA

People sometimes wonder if math has any relevance to programming. Certificates give a very practical example of applied math. Amazon’s certificate tells us that we should use the RSA algorithm to check the signature. RSA was created in the 1970’s by MIT professors Ron Rivest, Adi Shamir, and Len Adleman who found a clever way to combine ideas spanning 2000 years of math development to come up with a beautifully simple algorithm:

You pick two huge prime numbers “p” and “q.” Multiply them to get “n = p*q.” Next, you pick a small public exponent “e” which is the “encryption exponent” and a specially crafted inverse of “e” called “d” as the “decryption exponent.” You then make “n” and “e” public and keep “d” as secret as you possibly can and then throw away “p” and “q” (or keep them as secret as “d”). It’s really important to remember that “e” and “d” are inverses of each other.

Now, if you have some message, you just need to interpret its bytes as a number “M.” If you want to “encrypt” a message to create a “ciphertext”, you’d calculate:

C ≡ M^e (mod n)

This means that you multiply “M” by itself “e” times. The “mod n” means that we only take the remainder (i.e. the “modulus”) when dividing by “n.” For example, 11 AM + 3 hours ≡ 2 (PM) (mod 12 hours). The recipient knows “d”, which allows them to invert the exponentiation and recover the original message:

C^d ≡ (M^e)^d ≡ M^(e*d) ≡ M^1 ≡ M (mod n)

Just as interesting is that the person with “d” can “sign” a document by raising a message “M” to the “d” exponent:

M^d ≡ S (mod n)

This works because the signer makes “S”, “M”, “e”, and “n” public. Anyone can verify the signature “S” with a simple calculation:

S^e ≡ (M^d)^e ≡ M^(d*e) ≡ M^(e*d) ≡ M^1 ≡ M (mod n)

Public key cryptography algorithms like RSA are often called “asymmetric” algorithms because the encryption key (in our case, “e”) is not equal to (i.e. not “symmetric” with) the decryption key “d”. Reducing everything “mod n” makes it impossible to use the easy techniques that we’re used to, such as normal logarithms. The magic of RSA works because you can calculate/encrypt C ≡ M^e (mod n) very quickly, but it is really hard to calculate/decrypt C^d ≡ M (mod n) without knowing “d.” As we saw earlier, “d” is derived from factoring “n” back to its “p” and “q”, which is a tough problem.
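The whole scheme fits in a toy example with deliberately tiny primes. Nothing this small is remotely secure (real keys use moduli of 1024 bits or more); the numbers below are the classic textbook values, chosen only so you can follow the arithmetic:

```python
# Toy RSA with tiny primes -- illustrative only, never secure.
p, q = 61, 53
n = p * q                            # 3233, made public
e = 17                               # public "encryption exponent"
# d is the modular inverse of e, kept secret
d = pow(e, -1, (p - 1) * (q - 1))    # 2753

M = 65                               # message, interpreted as a number < n
C = pow(M, e, n)                     # encrypt: C = M^e mod n
assert pow(C, d, n) == M             # decrypt: C^d = M (mod n)

S = pow(M, d, n)                     # sign with the private exponent
assert pow(S, e, n) == M             # anyone can verify with (e, n)
```

Note that signing and decrypting are the same operation (raising to the “d” power); only the direction of trust differs.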

Verifying Signatures

The big thing to keep in mind with RSA in the real world is that all of the numbers involved have to be big to make things really hard to break using the best algorithms that we have. How big? Amazon.com’s certificate was “signed” by “VeriSign Class 3 Secure Server CA.” From the certificate, we see that this VeriSign modulus “n” is 2048 bits long which has this 617 digit base-10 representation:


1890572922 9464742433 9498401781 6528521078 8629616064
3051642608 4317020197 7241822595 6075980039 8371048211
4887504542 4200635317 0422636532 2091550579 0341204005
1169453804 7325464426 0479594122 4167270607 6731441028
3698615569 9947933786 3789783838 5829991518 1037601365
0218058341 7944190228 0926880299 3425241541 4300090021
1055372661 2125414429 9349272172 5333752665 6605550620
5558450610 3253786958 8361121949 2417723618 5199653627
5260212221 0847786057 9342235500 9443918198 9038906234
1550747726 8041766919 1500918876 1961879460 3091993360
6376719337 6644159792 1249204891 7079005527 7689341573
9395596650 5484628101 0469658502 1566385762 0175231997
6268718746 7514321

(Good luck trying to find “p” and “q” from this “n” – if you could, you could generate real-looking VeriSign certificates.)

VeriSign’s “e” is 2^16 + 1 = 65537. Of course, they keep their “d” value secret, probably on a safe hardware device protected by retinal scanners and armed guards. Before signing, VeriSign checked the validity of the contents that Amazon.com claimed on its certificate using a real-world “handshake” that involved looking at several of their business documents. Once VeriSign was satisfied with the documents, they used the SHA-1 hash algorithm to get a hash value of the certificate that had all the claims. In Wireshark, the full certificate shows up as the “signedCertificate” part:

It’s sort of a misnomer since it actually means that those are the bytes that the signer is going to sign and not the bytes that already include a signature.

The actual signature, “S”, is simply called “encrypted” in Wireshark. If we raise “S” to VeriSign’s public “e” exponent of 65537 and then take the remainder when divided by the modulus “n”, we get this “decrypted” signature hex value:


0001FFFFFFFFFFFF FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF
FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF
FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF
FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF
FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF
FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF
FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF
FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF
FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFF
FFFFFFFF00302130 0906052B0E03021A 05000414C19F8786
871775C60EFE0542 E4C2167C830539DB

Per the PKCS #1 v1.5 standard, the first byte is “00” and it “ensures that the encryption block, [when] converted to an integer, is less than the modulus.” The second byte of “01” indicates that this is a private key operation (i.e. it’s a signature). This is followed by a lot of “FF” bytes that are used to pad the result to make sure that it’s big enough. The padding is terminated by a “00” byte. It’s followed by “30 21 30 09 06 05 2B 0E 03 02 1A 05 00 04 14” which is the PKCS #1 v2.1 way of specifying the SHA-1 hash algorithm. The last 20 bytes are the SHA-1 hash digest of the bytes in “signedCertificate.”

Since the decrypted value is properly formatted and the last bytes are the same hash value that we can calculate independently, we can assume that whoever knew “VeriSign Class 3 Secure Server CA”’s private key “signed” it. We implicitly trust that only VeriSign knows the private key “d.”
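The check described above can be sketched as follows. This is a simplified illustration (the function name and parameters are mine, and real code must also parse the certificate’s ASN.1 structure): raise the signature to the public exponent, then compare the result byte-for-byte against the expected “00 01 FF…FF 00 DigestInfo digest” layout:

```python
import hashlib

def verify_rsa_sha1_signature(signature: int, e: int, n: int,
                              signed_certificate_bytes: bytes) -> bool:
    """Check a PKCS #1 v1.5 RSA/SHA-1 signature the way the browser does."""
    k = (n.bit_length() + 7) // 8
    # "Decrypt" the signature with the public exponent: S^e mod n
    em = pow(signature, e, n).to_bytes(k, "big")
    # The fixed DigestInfo prefix that names the SHA-1 algorithm
    sha1_digest_info = bytes.fromhex("3021300906052b0e03021a05000414")
    digest = hashlib.sha1(signed_certificate_bytes).digest()
    # Expect 00 01 FF..FF 00 <DigestInfo> <20-byte SHA-1 digest>
    expected = (b"\x00\x01"
                + b"\xff" * (k - 3 - len(sha1_digest_info) - 20)
                + b"\x00" + sha1_digest_info + digest)
    return em == expected
```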

We can repeat the process to verify that “VeriSign Class 3 Secure Server CA”’s certificate was signed by VeriSign’s “Class 3 Public Primary Certification Authority.”

But why should we trust that? There are no more levels on the trust chain.

The top “VeriSign Class 3 Public Primary Certification Authority” was signed by itself. This certificate has been built into Mozilla products as an implicitly trusted good certificate since version 1.4 of certdata.txt in the Network Security Services (NSS) library. It was checked-in on September 6, 2000 by Netscape’s Robert Relyea with the following comment:

“Make the framework compile with the rest of NSS. Include a ‘live’ certdata.txt with those certs we have permission to push to open source (additional certs will be added as we get permission from the owners).”

This decision has had a relatively long impact since the certificate has a validity range of January 28, 1996 – August 1, 2028.

As Ken Thompson explained so well in his “Reflections on Trusting Trust”, you ultimately have to implicitly trust somebody. There is no way around this problem. In this case, we’re implicitly trusting that Robert Relyea made a good choice. We also hope that Mozilla’s built-in certificate policy is reasonable for the other built-in certificates.

One thing to keep in mind here is that all these certificates and signatures were simply used to form a trust chain. On the public Internet, VeriSign’s root certificate is implicitly trusted by Firefox long before you go to any website. In a company, you can create your own root certificate authority (CA) that you can install on everyone’s machine.

Alternatively, you can get around having to pay companies like VeriSign and avoid certificate trust chains altogether. Certificates are used to establish trust by using a trusted third-party (in this case, VeriSign). If you have a secure means of sharing a secret “key”, such as whispering a long password into someone’s ear, then you can use that pre-shared key (PSK) to establish trust. There are extensions to TLS to allow this, such as TLS-PSK, and my personal favorite, TLS with Secure Remote Password (SRP) extensions. Unfortunately, these extensions aren’t nearly as widely deployed and supported, so they’re usually not practical. Additionally, these alternatives impose a burden that we have to have some other secure means of communicating the secret that’s more cumbersome than what we’re trying to establish with TLS (otherwise, why wouldn’t we use that for everything?).

One final check that we need to do is to verify that the host name on the certificate is what we expected. Nelson Bolyard’s comment in the SSL_AuthCertificate function explains why:

/* cert is OK. This is the client side of an SSL connection.
 * Now check the name field in the cert against the desired hostname.
 * NB: This is our only defense against Man-In-The-Middle (MITM) attacks! 
 */

This check helps prevent against a man-in-the-middle attack because we are implicitly trusting that the people on the certificate trust chain wouldn’t do something bad, like sign a certificate claiming to be from Amazon.com unless it actually was Amazon.com. If an attacker is able to modify your DNS server by using a technique like DNS cache poisoning, you might be fooled into thinking you’re at a trusted site (like Amazon.com) because the address bar will look normal. This last check implicitly trusts certificate authorities to stop these bad things from happening.

Pre-Master Secret

We’ve verified some claims about Amazon.com and know its public encryption exponent “e” and modulus “n.” Anyone listening in on the traffic can know this as well (as evidenced by our Wireshark captures). Now we need to create a random secret key that an eavesdropper/attacker can’t figure out. This isn’t as easy as it sounds. In 1996, researchers figured out that Netscape Navigator 1.1 was using only three sources to seed their pseudo-random number generator (PRNG). The sources were: the time of day, the process id, and the parent process id. As the researchers showed, these “random” sources aren’t that random and were relatively easy to figure out.

Since everything else was derived from these three “random” sources, it was possible to “break” the SSL “security” in 25 seconds on a 1996 era machine. If you still don’t believe that finding randomness is hard, just ask the Debian OpenSSL maintainers. If you mess it up, all the security built on top of it is suspect.

On Windows, random numbers used for cryptographic purposes are generated by calling the CryptGenRandom function that hashes bits sampled from over 125 sources. Firefox uses this function along with some bits derived from its own function to seed its pseudo-random number generator.

The 48 byte “pre-master secret” random value that’s generated isn’t used directly, but it’s very important to keep it secret since a lot of things are derived from it. Not surprisingly, Firefox makes it hard to find out this value. I had to compile a debug version and set the SSLDEBUGFILE and SSLTRACE environment variables to see it.

In this particular session, the pre-master secret showed up in the SSLDEBUGFILE as:


4456: SSL[131491792]: Pre-Master Secret [Len: 48]
03 01 bb 7b 08 98 a7 49 de e8 e9 b8 91 52 ec 81 ...{...I.....R..
4c c2 39 7b f6 ba 1c 0a b1 95 50 29 be 02 ad e6 L.9{......P)....
ad 6e 11 3f 20 c4 66 f0 64 22 57 7e e1 06 7a 3b .n.? .f.d"W~..z;

Note that it’s not completely random. The first two bytes are, by convention, the TLS version (03 01).

Trading Secrets

We now need to get this secret value over to Amazon.com. By Amazon’s wishes of “TLS_RSA_WITH_RC4_128_MD5”, we will use RSA to do this. You could make your input message equal to just the 48 byte pre-master secret, but the Public Key Cryptography Standard (PKCS) #1, version 1.5 RFC tells us that we should pad these bytes with random data to make the input exactly the size of the modulus (1024 bits/128 bytes). This makes it harder for an attacker to determine our pre-master secret. It also gives us one last chance to protect ourselves in case we did something really bone-headed, like reusing the same secret. If we reused the key, the eavesdropper would likely see a different value placed on the network due to the random padding.
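The PKCS #1 v1.5 “type 2” padding layout can be sketched in a few lines (the function name is mine; real implementations should come from a vetted crypto library): a “00 02” prefix, nonzero random filler, a “00” separator, then the message:

```python
import os

def pkcs1_v15_pad(message: bytes, modulus_len: int) -> bytes:
    """Pad a message out to the modulus size: 00 02 <nonzero random> 00 <message>."""
    pad_len = modulus_len - 3 - len(message)
    padding = bytearray()
    while len(padding) < pad_len:
        b = os.urandom(1)
        if b != b"\x00":          # padding bytes must be nonzero
            padding += b
    return b"\x00\x02" + bytes(padding) + b"\x00" + message
```

Because the filler is random, padding the same 48 byte pre-master secret twice produces two different 128 byte blocks, and therefore two different ciphertexts.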

Again, Firefox makes it hard to see these random values. I had to insert debugging statements into the padding function to see what was going on:

wrapperHandle = fopen("plaintextpadding.txt", "a");
fprintf(wrapperHandle, "PLAINTEXT = ");
for(i = 0; i < modulusLen; i++)
{
    fprintf(wrapperHandle, "%02X ", block[i]);
}
fprintf(wrapperHandle, "\r\n");
fclose(wrapperHandle);

In this session, the full padded value was:


00 02 12 A3 EA B1 65 D6 81 6C 13 14 13 62 10 53 23 B3 96 85 FF 24
FA CC 46 11 21 24 A4 81 EA 30 63 95 D4 DC BF 9C CC D0 2E DD 5A A6
41 6A 4E 82 65 7D 70 7D 50 09 17 CD 10 55 97 B9 C1 A1 84 F2 A9 AB
EA 7D F4 CC 54 E4 64 6E 3A E5 91 A0 06 00 03 01 BB 7B 08 98 A7 49
DE E8 E9 B8 91 52 EC 81 4C C2 39 7B F6 BA 1C 0A B1 95 50 29 BE 02
AD E6 AD 6E 11 3F 20 C4 66 F0 64 22 57 7E E1 06 7A 3B

Firefox took this value and calculated “C ≡ Me (mod n)” to get the value we see in the “Client Key Exchange” record:

Finally, Firefox sent out one last unencrypted message, a “Change Cipher Spec” record:

This is Firefox’s way of telling Amazon that it’s going to start using the agreed upon secret to encrypt its next message.

Deriving the Master Secret

If we’ve done everything correctly, both sides (and only those sides) now know the 48 byte (384 bit) pre-master secret. There’s a slight trust issue here from Amazon’s perspective: the pre-master secret contains only bits that were generated by the client; it doesn’t take anything into account from the server or anything we said earlier. We’ll fix that by computing the “master secret.” Per the spec, this is done by calculating:

master_secret = PRF(pre_master_secret, 
                    "master secret", 
                    ClientHello.random + ServerHello.random)

The “pre_master_secret” is the secret value we sent earlier. The “master secret” is simply a string whose ASCII bytes (e.g. “6d 61 73 74 65 72 …”) are used. We then concatenate the random values that were sent in the ClientHello and ServerHello (from Amazon) messages that we saw at the beginning.

The PRF is the “Pseudo-Random Function” that’s also defined in the spec and is quite clever. It combines the secret, the ASCII label, and the seed data we give it by using the keyed-Hash Message Authentication Code (HMAC) versions of both the MD5 and SHA-1 hash functions. Half of the secret is sent to each hash function. It’s clever because it is quite resistant to attack, even in the face of weaknesses in MD5 and SHA-1. This process can feed back on itself and iterate forever to generate as many bytes as we need.
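The construction can be sketched directly from the spec (function names are mine): each half of the secret drives an iterated HMAC stream (“P_hash”), and the MD5 and SHA-1 streams are XORed together:

```python
import hashlib
import hmac

def p_hash(hash_name: str, secret: bytes, seed: bytes, length: int) -> bytes:
    """The spec's P_hash: iterate HMAC to produce as many bytes as needed."""
    out = b""
    a = seed                                                   # A(0) = seed
    algo = getattr(hashlib, hash_name)
    while len(out) < length:
        a = hmac.new(secret, a, algo).digest()                 # A(i) = HMAC(secret, A(i-1))
        out += hmac.new(secret, a + seed, algo).digest()
    return out[:length]

def tls10_prf(secret: bytes, label: bytes, seed: bytes, length: int) -> bytes:
    """TLS 1.0 PRF: P_MD5 over the first half of the secret,
    XORed with P_SHA1 over the second half."""
    half = (len(secret) + 1) // 2
    s1, s2 = secret[:half], secret[-half:]   # halves share a middle byte if odd
    md5_part = p_hash("md5", s1, label + seed, length)
    sha1_part = p_hash("sha1", s2, label + seed, length)
    return bytes(a ^ b for a, b in zip(md5_part, sha1_part))
```

Computing the master secret is then one call: `tls10_prf(pre_master_secret, b"master secret", client_random + server_random, 48)`.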

Following this procedure, we obtain a 48 byte “master secret” of


4C AF 20 30 8F 4C AA C5 66 4A 02 90 F2 AC 10 00 39 DB 1D E0 1F CB
E0 E0 9D D7 E6 BE 62 A4 6C 18 06 AD 79 21 DB 82 1D 53 84 DB 35 A7
1F C1 01 19

Generating Lots of Keys

Now that both sides have a “master secret”, the spec shows us how we can derive all the session keys we need using the PRF to create a “key block” that we will pull data from:


key_block = PRF(SecurityParameters.master_secret,
                "key expansion",
                SecurityParameters.server_random +
                SecurityParameters.client_random);

The bytes from “key_block” are used to populate the following:


client_write_MAC_secret[SecurityParameters.hash_size]
server_write_MAC_secret[SecurityParameters.hash_size]
client_write_key[SecurityParameters.key_material_length]
server_write_key[SecurityParameters.key_material_length]
client_write_IV[SecurityParameters.IV_size]
server_write_IV[SecurityParameters.IV_size]

Since we’re using a stream cipher instead of a block cipher like the Advanced Encryption Standard (AES), we don’t need the Initialization Vectors (IVs). Therefore, we just need one Message Authentication Code (MAC) key for each side that is 16 bytes (128 bits), since the specified MD5 hash digest size is 16 bytes. In addition, the RC4 cipher uses a 16 byte (128 bit) key that both sides will need as well. All told, we need 2×16 + 2×16 = 64 bytes from the key block.
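Carving up the key block is then just slicing. A minimal sketch (the function name is mine; the field names follow the spec):

```python
def partition_key_block(key_block: bytes) -> dict:
    """Slice the first 64 bytes of the key block for TLS_RSA_WITH_RC4_128_MD5:
    two 16-byte MAC secrets, then two 16-byte RC4 keys, no IVs."""
    assert len(key_block) >= 64
    return {
        "client_write_MAC_secret": key_block[0:16],
        "server_write_MAC_secret": key_block[16:32],
        "client_write_key":        key_block[32:48],
        "server_write_key":        key_block[48:64],
    }
```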

Running the PRF, we get these values:


client_write_MAC_secret = 80 B8 F6 09 51 74 EA DB 29 28 EF 6F 9A B8 81 B0
server_write_MAC_secret = 67 7C 96 7B 70 C5 BC 62 9D 1D 1F 4A A6 79 81 61
client_write_key = 32 13 2C DD 1B 39 36 40 84 4A DE E5 6C 52 46 72
server_write_key = 58 36 C4 0D 8C 7C 74 DA 6D B7 34 0A 91 B6 8F A7

Prepare to be Encrypted!

The last handshake message the client sends out is the “Finished” message. This clever message proves that no one tampered with the handshake and that we know the key. The client takes all bytes from all handshake messages and puts them into a “handshake_messages” buffer. We then calculate 12 bytes of “verify_data” using the pseudo-random function (PRF) with our master secret, the label “client finished”, and an MD5 and SHA-1 hash of “handshake_messages”:


verify_data = PRF(master_secret,
                  "client finished",
                  MD5(handshake_messages) +
                  SHA-1(handshake_messages)
                  ) [12]

We take the result and add a record header byte “0x14” to indicate “finished” and length bytes “00 00 0c” to indicate that we’re sending 12 bytes of verify data. Then, like all future encrypted messages, we need to make sure the decrypted contents haven’t been tampered with. Since our cipher suite in use is TLS_RSA_WITH_RC4_128_MD5, this means we use the MD5 hash function.

Some people get paranoid when they hear MD5 because it has some weaknesses. I certainly don’t advocate using it as-is. However, TLS is smart in that it doesn’t use MD5 directly, but rather the HMAC version of it. This means that instead of using MD5(m) directly, we calculate:


HMAC_MD5(Key, m) = MD5((Key ⊕ opad) ++ MD5((Key ⊕ ipad) ++ m))

(The ⊕ means XOR, ++ means concatenate, “opad” is the bytes “5c 5c … 5c”, and “ipad” is the bytes “36 36 … 36”).
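Here is that formula as a short sketch (the standard library’s hmac module implements the same construction, which lets us check our work):

```python
import hashlib

def hmac_md5(key: bytes, message: bytes) -> bytes:
    """HMAC-MD5 built straight from the formula above."""
    block_size = 64                       # MD5 processes 64-byte blocks
    if len(key) > block_size:
        key = hashlib.md5(key).digest()   # long keys are hashed first
    key = key.ljust(block_size, b"\x00")  # then zero-padded to the block size
    opad = bytes(k ^ 0x5C for k in key)
    ipad = bytes(k ^ 0x36 for k in key)
    inner = hashlib.md5(ipad + message).digest()
    return hashlib.md5(opad + inner).digest()
```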

In particular, we calculate:


HMAC_MD5(client_write_MAC_secret,
         seq_num +
         TLSCompressed.type +
         TLSCompressed.version +
         TLSCompressed.length +
         TLSCompressed.fragment)

As you can see, we include a sequence number (“seq_num”) along with attributes of the plaintext message (here it’s called “TLSCompressed”). The sequence number foils attackers who might try to take a previously encrypted message and insert it midstream. If this occurred, the sequence numbers would definitely be different than what we expected. This also protects us from an attacker dropping a message.

All that’s left is to encrypt these bytes.

RC4 Encryption

Our negotiated cipher suite was TLS_RSA_WITH_RC4_128_MD5. This tells us that we need to use Ron’s Code #4 (RC4) to encrypt the traffic. Ron Rivest developed the RC4 algorithm to generate pseudo-random bytes from a variable-length key (here, our 16 byte write key). The algorithm is so simple you can actually memorize it in a few minutes.

RC4 begins by creating a 256-byte “S” byte array and populating it with 0 to 255. You then iterate over the array by mixing in bytes from the key. You do this to create a state machine that is used to generate “random” bytes. To generate a random byte, we shuffle around the “S” array.

Put graphically, it looks like this:

To encrypt a byte, we xor this pseudo-random byte with the byte we want to encrypt. Remember that xor’ing a bit with 1 causes it to flip. Since we’re generating random numbers, on average the xor will flip half of the bits. This random bit flipping is effectively how we encrypt data. As you can see, it’s not very complicated and thus it runs quickly. I think that’s why Amazon chose it.
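The full algorithm, key scheduling plus byte generation, fits in a few lines. This is a sketch for illustration (RC4 is long obsolete and shouldn’t be used today):

```python
def rc4(key: bytes):
    """Yield the RC4 keystream for the given key."""
    # Key-scheduling: fill S with 0..255, then shuffle it using the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation: keep shuffling S, emitting one byte at a time.
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def rc4_crypt(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; the same call encrypts and decrypts."""
    keystream = rc4(key)
    return bytes(b ^ next(keystream) for b in data)
```

Because encryption is just XOR with the keystream, running the same function over the ciphertext recovers the plaintext.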

Recall that we have a “client_write_key” and a “server_write_key.” This means we need to create two RC4 instances: one to encrypt what our browser sends and the other to decrypt what the server sent us.

The first few random bytes out of the “client_write” RC4 instance are “7E 20 7A 4D FE FB 78 A7 33 …” If we xor these bytes with the unencrypted header and verify message bytes of “14 00 00 0C 98 F0 AE CB C4 …”, we’ll get what appears in the encrypted portion that we can see in Wireshark:

The server does almost the same thing. It sends out a “Change Cipher Spec” and then a “Finished Message” that includes all handshake messages, including the decrypted version of the client’s “Finished Message.” Consequently, this proves to the client that the server was able to successfully decrypt our message.

Welcome to the Application Layer!

Now, 220 milliseconds after we started, we’re finally ready for the application layer. We can now send normal HTTP traffic that’ll be encrypted by the TLS layer with the RC4 write instance and decrypt traffic with the server RC4 write instance. In addition, the TLS layer will check each record for tampering by computing the HMAC_MD5 hash of the contents.

At this point, the handshake is over. Our TLS record’s content type is now 23 (0x17). Encrypted traffic begins with “17 03 01” which indicate the record type and TLS version. These bytes are followed by our encrypted size, which includes the HMAC hash.
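Framing one application-data record can be sketched as follows (the function name is mine): a type byte of 23, the 3.1 version bytes, and a two byte length covering the ciphertext plus its HMAC:

```python
import struct

def application_data_record(ciphertext_with_mac: bytes) -> bytes:
    """Prepend the 5-byte TLS record header: type 23 (0x17),
    version 3.1 (0x03 0x01), then the big-endian payload length."""
    header = struct.pack(">BBBH", 0x17, 0x03, 0x01, len(ciphertext_with_mac))
    return header + ciphertext_with_mac
```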

Encrypting the plaintext of:

GET /gp/cart/view.html/ref=pd_luc_mri HTTP/1.1 
Host: www.amazon.com 
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.10) Gecko/2009060911 Minefield/3.0.10 (.NET CLR 3.5.30729) 
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 
Accept-Language: en-us,en;q=0.5 
Accept-Encoding: gzip,deflate 
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 
Keep-Alive: 300 
Connection: keep-alive 
...

will give us the bytes we see on the wire:

The only other interesting fact is that the sequence number increases on each record; it’s now 1 (the next record will be 2, and so on).

The server does the same type of thing on its side using the server_write_key. We see its response, including the tell-tale application data header:

Decrypting this gives us:

HTTP/1.1 200 OK 
Date: Wed, 10 Jun 2009 01:09:30 GMT 
Server: Server 
... 
Cneonction: close 
Transfer-Encoding: chunked

which is a normal HTTP reply that includes a non-descriptive “Server: Server” header and a misspelled “Cneonction: close” header coming from Amazon’s load balancers.

TLS is just below the application layer. The HTTP server software can act as if it’s sending unencrypted traffic. The only change is that it writes to a library that does all the encryption. OpenSSL is a popular open-source library for TLS.

The connection will stay open while both sides send and receive encrypted data until either side sends out a “closure alert” message and then closes the connection. If we reconnect shortly after disconnecting, we can re-use the negotiated keys (if the server still has them cached) without using public key operations, otherwise we do a completely new full handshake.

It’s important to realize that application data records can be anything. The only reason “HTTPS” is special is because the web is so popular. There are lots of other TCP/IP based protocols that ride on top of TLS. For example, TLS is used by FTPS and secure extensions to SMTP. It’s certainly better to use TLS than inventing your own solution. Additionally, you’ll benefit from a protocol that has withstood careful security scrutiny.

… And We’re Done!

The very readable TLS RFC covers many more details that were missed here. We covered just one single path in our observation of the 220 millisecond dance between Firefox and Amazon’s server. Quite a bit of the process was affected by the TLS_RSA_WITH_RC4_128_MD5 Cipher Suite selection that Amazon made with its ServerHello message. It’s a reasonable choice that slightly favors speed over security.

As we saw, if someone could secretly factor Amazon’s “n” modulus into its respective “p” and “q”, they could effectively decrypt all “secure” traffic until Amazon changes their certificate. Amazon counter-balances this concern with a short, one year duration certificate:

One of the cipher suites that was offered was “TLS_DHE_RSA_WITH_AES_256_CBC_SHA”, which uses the Diffie-Hellman key exchange and has the nice property of “forward secrecy.” This means that if someone cracked the mathematics of one key exchange, they’d be no better off when trying to decrypt another session. One downside to this algorithm is that it requires more math with big numbers, and thus is a little more computationally taxing on a busy server. The “Advanced Encryption Standard” (AES) algorithm was present in many of the suites that we offered. It’s different from RC4 in that it works on 16 byte “blocks” at a time rather than a single byte. Since its key can be up to 256 bits, many consider it to be more secure than RC4.

In just 220 milliseconds, two endpoints on the Internet came together, provided enough credentials to trust each other, set up encryption algorithms, and started to send encrypted traffic.

And to think, all of this just so Bob can buy milk.

UPDATE: I wrote a program that walks through the handshake steps mentioned in this article. I posted it to GitHub.

Latest

Youth sports have been hit with few coronavirus outbreaks so far. Why is ice hockey so different?

Emily walpole

Published

on

“Whole hockey teams are getting quarantined,” said Bellemore, a hockey parent, coach and president of the Manchester Youth Regional Hockey Association. “It’s getting very real.”

State officials and other authorities have been scrambling to mitigate the damage: On Nov. 12, seven governors in the Northeast banded together to ban all interstate youth hockey until at least the end of the year. The following week, health officials in Minnesota, where hockey is associated with the most clusters of any youth sport, put all sports on “pause” for four weeks. Many others have imposed new restrictions and safety measures on the game.

Youth sports — soccer, basketball, cross-country, swimming, whether held indoors or out, a source of American pride, prestige and bonding — were among the first gatherings to be allowed post-lockdown. Organizers worked closely with public health officials to make modifications that balance safety with maintaining the spirit of the games. This has worked to some extent.

While public health officials suspect off-field interactions may be contributing to community spread, there’s little hard data. In most areas, there have been few to no documented outbreaks, much less superspreader events.

Ice hockey is an anomaly. Scientists are studying hockey-related outbreaks hoping to find clues about the ideal conditions in which the coronavirus thrives — and how to stop it. Experts speculate that ice rinks may trap the virus around head level in a rink that, by design, restricts airflow, temperature and humidity.

The hockey-related cases have been especially striking, epidemiologists have said, because clubs followed Centers for Disease Control and Prevention limits on gathering size and had numerous social distancing measures in place. In retrospect, one mistake by some clubs was that until recently masks had been required on ice for only the two players doing the initial faceoff for the puck — although many players wore clear face shields, which theoretically should have a similar effect.

“We’re watching hockey very carefully because it’s the first major sport that’s been played indoors predominantly and also during the winter months,” said Ryan Demmer, an epidemiologist at the University of Minnesota’s School of Public Health.

Demmer said the cases provide some of the first real-world evidence to support early theories about the importance of how people breathe, ventilation, and the social dimensions of transmission.

One critical way hockey differs from other contact team sports is how players do line changes — substitutions of groups of players — and are expected to sprint for nearly the whole time they are on the ice. Experts say it probably leads to heavier breathing, resulting in more particles being exhaled and inhaled.

Jose-Luis Jimenez, an air engineer at the University of Colorado, speculated that conditions in rinks keep the virus suspended perhaps six to nine feet above the ice. Similar outbreaks have been documented in other chilly venues, including meat processing plants and a curling match earlier in the pandemic.

“I suspect the air is stratified,” he said. “Much like in a cold winter night, you have these inversions where the cold air with the virus which is heavier stays closer to the ground. That gives players many more chances to breathe it in.”

Timothy McDonald, public health director in Needham, Mass., said we should not rule out the way kids socialize — in locker rooms, carpools and postgame gatherings — as potential contributing factors. By late October, his area had seen at least six coronavirus cases related to sports clusters that span a wide range of ages, from fifth-graders to high school sophomores. He said some of those children played on multiple sports teams, including hockey.

“We’ve seen a lot of people mingling after the game or having discussions and parents talking and letting kids play around after the game,” he said. “There’s no way to tell from our perspective whether it’s on the ice — or waiting for 10 or 15 minutes while everyone talks after the game.”

Many unknowns

When schools shut down in March, there was huge confusion about the extent to which children could get the virus and transmit it to others. Today, cases among those younger than 18 are soaring. The American Academy of Pediatrics reported last week that more than 1.3 million children had tested positive for the coronavirus during the pandemic. Nearly 154,000 children tested positive from Nov. 19 to 26.

Epidemiologists are uncertain where most of these transmissions are occurring, but early reports from the United States, bolstered by more robust data from Europe and Asia, suggest they are unlikely to be related to school. Emily Oster, a professor of economics at Brown University who has been tracking coronavirus outbreaks in schools, and others say they believe informal neighborhood get-togethers, youth sports and other activities may be contributing.

Rhode Island, for example, has reported that virtual-only learners are being infected at similar rates as those attending in-person school. Oster said infection rates seem to be going up nationwide, “whether schools are open or not.”

Joseph Allen, a researcher at the Harvard T.H. Chan School of Public Health, said he believes it was a mistake for school sports to shut down, because kids need physical activity, and some for-profit businesses filling the gaps may be operating in a way where “controls may not be as stringent.”

“Not having sports in schools ultimately leads to wider contact networks for many kids,” he explained.

David Rubin, director of the PolicyLab at the Children’s Hospital of Philadelphia, said the “disease reservoir was lower” related to children in the early fall, suggesting that sports played at that time — namely, soccer — weren’t contributing much to spread. “We saw very little transmission on the field of play,” he said.

“In winter sports, you now add the indoor element. And I think there’s a fair amount of concern that hockey certainly has transmission around the game,” he said.

A PolicyLab blog post last month recommended that if youth sports leagues want to preserve any opportunity to keep playing, they need to enact mandates that strictly curtail all off-field interaction. Even then, “the potential for on-field spread may be too overwhelming to continue safely with team competition during periods of widespread community transmission, and may need to be sacrificed to preserve in-school learning options, at least until early spring or transmission rates decrease substantially.”

When children’s sports started up again this summer, tensions flared among health officials, sports providers and families over which safety measures were necessary and which were over the top. In the pandemic world, soccer was sometimes played seven-on-seven instead of 11-on-11, and with kick-ins instead of throw-ins; basketball with every other spot in free-throw lineups empty; swim practices with some kids starting in the middle of lanes to ensure adequate spacing; cross-country with runners racing in small flights to minimize interactions.

But these modifications sent some families “jurisdiction shopping” to find places that allowed games to proceed as they had before the virus outbreak, and this was a part of what happened with hockey in New England.

Hockey culture

Ice hockey is part of the culture in this area of the country. Some kids get their first skates almost as soon as they can walk, and family weekends revolve around games. In the aftermath of the first wave of the virus, clubs in numerous states, including Massachusetts, introduced safety measures such as no checking at the younger levels, physical distancing in locker rooms, and masks for the two players doing the faceoffs.

Massachusetts Hockey President Bob Joyce said families who didn’t like those new rules took their children to play in neighboring states with fewer restrictions. And sometimes those players played on multiple teams or had siblings who did and went to school, creating very large social networks.

“It was a wake-up call,” Joyce said. He said state officials estimated that those 108 initial hockey cases amounted to 3,000 to 4,000 others potentially exposed.

In an October report, the CDC detailed a large outbreak in Florida among amateur adult hockey players on two teams that played each other but had no other contact. Investigators speculated that the indoor space and close contact increased the infection risk. They also pointed out that ice hockey “involves vigorous physical exertion accompanied by deep, heavy respiration, and during the game, players frequently move from the ice surface to the bench while still breathing heavily.”

Ice rinks are surrounded by plexiglass, not only to stop errant pucks but also to keep the airflow stable so the ice can remain cold; by design, there is little ventilation or humidity. The surface of the ice is kept around 20 degrees Fahrenheit; the ambient air temperature, in the 50s. The Department of Homeland Security has shown in lab experiments that the virus may live up to two times longer in the air at those temperatures. At 86 degrees, for example, 99 percent of the airborne virus is estimated to decay in 52 minutes. But at 50 degrees, it would take 109 minutes.
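Assuming simple exponential decay (a back-of-the-envelope simplification; the lab work quotes only the 99 percent figures), those decay times can be converted into equivalent airborne half-lives:

```python
import math

def half_life_from_t99(t99_minutes):
    """Time for half the airborne virus to decay, given the time for 99% to decay.

    Assumes first-order exponential decay: exp(-k * t99) = 0.01.
    """
    k = math.log(100) / t99_minutes  # decay constant per minute
    return math.log(2) / k

print(round(half_life_from_t99(52), 1))   # at 86°F: 7.8 minutes
print(round(half_life_from_t99(109), 1))  # at 50°F: 16.4 minutes
```

In other words, the quoted figures imply the airborne concentration halves roughly every 8 minutes in warm air but only about every 16 minutes at rink-like temperatures.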

William Bahnfleth, a professor of architectural engineering at Penn State University, said there is growing evidence that humidity may play an important role. In higher humidity, the virus attaches to bigger droplets that drop faster to the ground, decreasing the chance that someone will inhale them. The drier the air, the faster droplets will evaporate into smaller-size particles that stay in the air, increasing the concentration.

“There are some researchers who have come to believe that humidification is the key above all,” he said.

Studies have shown that the virus doesn’t survive as long in the humid air, and that we’re more susceptible to viruses when the air is drier. Separately, epidemiological data from a long-term care facility has shown a correlation between lower humidity and higher infection rates.

Rubin, who is a pediatrician in addition to his public policy research job, said he worries those on the ice may be inhaling larger doses of the virus due to these environmental conditions, making it more likely they will become infected.

“It’s very hard to sort out, but you wonder if increased inoculum of the virus is an extra factor,” he said.

Demmer expressed similar thoughts: “It could be infection rates are common across sports, but in a sport like hockey where you are trapping more virus in the breathable air it could result in more severe infections that end up being symptomatic.”

The National Hockey League was able to complete its playoffs after players were put in a bubble where they were tested each day, administered symptom checks and temperature screenings. No cases were reported. But conducting such rigorous screening on the roughly 650,000 amateur players and officials in the United States is an impossible task.

In Vermont, an outbreak at a single ice rink ripped through the center of the state, affecting at least 20 towns in at least four counties, and seeding other outbreaks at several schools. By Oct. 30, when Vermont Gov. Phil Scott (R) detailed the outbreak at a press briefing, 473 contacts had been associated with it.

“One case,” Scott emphasized, “can turn one event into many.”

For Tyler Amburgey, a 29-year-old coach in Lavon, Texas, north of Dallas, the coronavirus started out like a cold but soon progressed to a headache, fatigue and shortness of breath. Authorities later determined that the outbreak spanned several teams and 30 people. By the third day of his illness, Aug. 29, several of Amburgey’s players had tested positive, and he was so ill that he canceled hockey practice.

Later that day his wife found him in his bed, unresponsive, and called 911. His heart had stopped, relatives told media outlets, and paramedics were unable to revive him.

Weekly unemployment claims still trending up

by Timothy McQuiston, Vermont Business Magazine Weekly unemployment claims fell last week after the previous week’s spike, but have been trending up consistently the last two months. After being near their lowest levels since the beginning of the pandemic, claims have increased beyond the usual seasonal slowdown. Claims fell 224 to 1,255 last week (up 131 from the same time last year).

As for ongoing jobless claims, for the week ending November 11, 2020, the Labor Department processed 11,337 claims, down 1,292 from the previous week and 7,237 more than at the same time last year.

As for further comparison, initial Vermont claims for the week of March 21, 2020, were 3,784, up 3,125 from the week of March 14.

Labor Commissioner Michael Harrington said at Governor Scott’s media briefing Friday that he has a lot of concern for the end of CARES Act funding and therefore the pandemic unemployment benefits and extended benefits for UI filers that came with it.

The extra benefits will cease the week after Christmas for nearly all those filers. Like the governor, he is hopeful that Congress will come up with what Scott called “bridge” funding for these programs until the Biden Administration and the new Congress can come up with a new CARES Act-type funding plan. It does appear that some level of federal help will be forthcoming.

The governor is also hoping that funding includes budget relief for states, but he is less certain of that.

Harrington added that there are still some appeals and adjudications continuing regarding those pandemic benefits and that otherwise nearly all of the last of the emergency unemployment Lost Wages Assistance money has been distributed. The LWA was the last and smallest of the unemployment benefit programs.

The federal government portion of extra benefits, which is nearly all of the pandemic funding, must meet strict guidelines and there is very little the state can do to mitigate an issue.

The total number of unemployed is about 20,000, including the extra PUA claimants, which is down from the peak last spring of over 80,000 Vermonters getting some type of unemployment insurance.

There is recent discussion in Congress that a plan could be enacted during the “lame duck” session, but more likely after President-elect Biden is inaugurated.

Meanwhile, the state unemployment rate, which was the lowest in the nation before the pandemic, then spiked during the pandemic, has retreated and is now second lowest in the nation.

However, the VDOL points out that the US Census modeling has not caught up with the reality of the pandemic and Vermont’s 3.2 percent unemployment rate likely portrays a rosier economic picture than what actually exists.

Labor Commissioner Harrington said in late November that the real unemployment rate is more in the 5 percent range, and if it included the PUA, the rate is likely more in the 6-8 percent range.

He and Scott said that while the data the US Census collects is not erroneous, they disagree with the methodology the federal government is using given the altered behavior of people during the pandemic.

They said people have left the workforce for pandemic-related reasons, such as personal safety or child care. Those people are no longer counted as unemployed, and they also drop out of the total labor force that serves as the denominator in the calculation, which lowers the official unemployment rate.

Per federal rule, this ultimately decreases the ability of the state to offer extended UI benefits, as they were able earlier in the year.
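The arithmetic behind that effect can be sketched with purely hypothetical numbers (these are not Vermont figures): workers who stop looking for work are dropped from both the unemployed count and the labor force.

```python
def unemployment_rate(unemployed, employed):
    """Headline rate: active job seekers as a share of the labor force."""
    labor_force = unemployed + employed
    return round(100 * unemployed / labor_force, 1)

# Hypothetical state: 290,000 employed, 20,000 actively looking for work.
print(unemployment_rate(20_000, 290_000))   # 6.5

# If 10,000 of those job seekers leave the workforce (safety, childcare),
# they no longer count as unemployed or as part of the labor force:
print(unemployment_rate(10_000, 290_000))   # 3.3
```

The rate falls even though no one found a job, which is the distortion state officials describe.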

Governor Scott said the state has been in contact with Vermont’s congressional delegation on trying to change the formula the US Census Bureau uses to determine the state’s unemployment rate.

There are also over 8,000 Vermonters on Pandemic Unemployment Assistance (sole proprietors/self employed etc).

The PUA claims are not included in the unemployment rate calculation.

Harrington also addressed issues faced by the self-employed in collecting benefits.

If sole proprietors did not file their tax returns by a certain time, they missed out on some benefits. Harrington said this is a federal government rule. The state was allowed a 21-day grace period, but cases are still being adjudicated.

Another issue: if a self-employed person received even one dollar of regular UI benefits, they are disallowed, again by federal rule, Harrington said, from receiving any PUA.

For instance, some people who work for themselves also carry a part-time job. If they got laid off from that job and received any UI payments, then they’re stuck on the UI side and cannot get PUA.

The PUA benefits in some cases are more advantageous; for instance they will last through the end of this year. PUA claimants also can get partial payments even if they have some income.

What a new PUA looks like is unclear until and if one is signed into law. But it appears as of now that it might not include new filers after a certain time.

Scott has also extended his Emergency Order until December 15. He has said that he will continue to extend the Order as long as necessary and that we are “only half-way through” the impact of the novel coronavirus.

Also, the $1.25 billion CARES Act federal funds have all been allocated, though some budgetary shifting could still occur. The money must be spent by the end of December.

Also, the additional $600 in weekly benefits from the federal government for all unemployment programs ended July 25.

The PUA program, which is fully funded by the federal government and is intended for workers who do not qualify for regular UI, will last until the end of the year. Those claimants will receive regular benefits (but, again, not the extra $600).

“That $600 is concerning. I know a lot of families are counting on that to cover a lot of their expenses,” Scott said over the summer.

After a spike of claims at the beginning of the pandemic, followed by a steep decline as the economy began to reopen in April, initial unemployment claims fell consistently since the beginning of July before flattening over the last couple months.

Claims hit their peak in early April. At that point, Governor Scott’s “Stay Home” order resulted in the closing of schools, restaurants, construction and more, while many other industries cut back operations.

Over $500 million of federal money has been added to Vermont unemployment checks so far.

Since March 1, over 80,000 new claims have been filed in Vermont when including PUA.

The official Vermont March unemployment rate was 3.1 percent, but the April rate was 15.6 percent, which is the highest on record. The Vermont unemployment rate in May fell to 12.7 percent.

The US rate fell to 7.9 percent in September, from 8.4 percent in August, 10.2 percent in July, 11.1 percent in June and 13.3 percent in May. The US April rate was 14.7 percent, the highest since the rate was first calculated in 1948 and, unofficially, the highest since the Great Depression, when unemployment reached about 25 percent.

Nationwide, according to the US Labor Department, initial claims for state unemployment benefits totaled 712,000 for the week ending November 28, the lowest since the beginning of the pandemic and down from 787,000 the week before and 742,000 the week before that.

Claims generally have been falling since the early weeks of the pandemic in March. Early on, weekly US claims reached as high as 5.2 million and 6.6 million. Just prior to the steep job loss, there were 282,000 claims on March 14.

US GDP had its worst quarter on record as it fell 32.9 percent in the second quarter; the next worst was in 1921.

The Pandemic Unemployment Assistance (PUA) has added to the ranks of those receiving benefits, but is not counted in the official unemployment rate. The PUA serves the self-employed who previously did not qualify to receive UI benefits and might still be working to some extent.

By comparison, the surge during the Great Recession peaked at 38,081 claims for the entire year of 2009.

The claims back in 2009 pushed the state’s Unemployment Insurance Trust Fund into deficit and required the state to borrow money from the federal government to cover claims.

Right now (see data below), Vermont has $252.2 million in its Trust Fund and saw the fund decrease by a net of $3.3 million last week. Payments lag claims typically by a week. Balance as of March 1 was $506,157,247.

Vermont at the beginning of the pandemic had more than double the UI Trust Fund it did when the economy started to slide in 2007. It went into deficit and the state had to borrow money from the federal government to pay claims. Some states like California are already in UI deficit because of the COVID crisis.

Scott said the UI fund is not expected to run out under current projections.

“We are in a much healthier position than many other states,” Labor Commissioner Harrington has said.

Given the Trust Fund’s strong performance and the burden of unemployment taxes on employers, Governor Scott reduced the UI tax on businesses. He also announced that starting the first week of July, the maximum unemployment benefit to workers will increase about $20 a week.

While the UI Trust Fund will not fall into deficit under current trends, the governor has acknowledged that they simply cannot predict it given how economic conditions could swing if there is a second surge of COVID-19.

Still, he’s moving forward with the UI changes now because the burden on employers and employees is now.

Stories:

Vermont’s unemployment rate falls to 3.2 percent in October

Over $100 million in recovery grants awarded, still more available

Businesses to see double-digit rate decrease in workers’ comp insurance in 2020

Tax revenues finish year nearly $60 million above targets

UI tax rates for employers fell again on July 1, 2018, as claims continue to be lower than previous projections. Individual employers’ reduced taxable wage rates will vary according to their experience rating; however, the rate reduction will lower the highest UI tax rate from 7.7 percent to 6.5 percent. The lowest UI tax rate will see a reduction from 1.1 percent to 0.8 percent.

Also effective July 1, 2018, the maximum weekly unemployment benefit will be indexed upwards to 57% of the average weekly wage. The current maximum weekly benefit amount is $466, which will increase to $498. Both changes are directly tied to the change in the Tax Rate Schedule.

The Vermont Department of Labor announced Thursday, October 1, 2020 an increase to the State’s minimum wage. Beginning January 1, 2021, the State’s minimum wage will increase $0.79, from $10.96 to $11.75 per hour. The calculation for this increase is in accordance with Act 86 of the 2019 Vermont General Assembly.

This adjustment also impacts the minimum wage of “tipped employees.” The Basic Tipped Wage Rate for service or tipped employees equals 50% of the full minimum wage or $5.88 per hour starting January 1, 2021.

The Vermont Department of Labor has announced that the state is set to trigger off of the High Extended Benefits program, as of October 10, 2020. This determination by the US Department of Labor follows the recent announcement of Vermont’s unemployment rate decreasing from 8.3% in July to 4.8% in August.

Vermont’s minimum wage rose to $10.78 on January 1, 2019.

The Unemployment Weekly Report can be found at: http://www.vtlmi.info/. Previously released Unemployment Weekly Reports and other UI reports can be found at: http://www.vtlmi.info/lmipub.htm#uc

NOTE: Employment (nonfarm payroll) – A count of all persons who worked full- or part-time or received pay from a nonagricultural employer for any part of the pay period which included the 12th of the month. Because this count comes from a survey of employers, persons who work for two different companies would be counted twice. Therefore, nonfarm payroll employment is really a count of the number of jobs, rather than the number of persons employed. Persons may receive pay from a job if they are temporarily absent due to illness, bad weather, vacation, or labor-management dispute. This count is based on where the jobs are located, regardless of where the workers reside, and is therefore sometimes referred to as employment “by place of work.” Nonfarm payroll employment data are collected and compiled based on the Current Employment Statistics (CES) survey, conducted by the Vermont Department of Labor. This count was formerly referred to as nonagricultural wage and salary employment.

UI claims by industry last week in Vermont are similar in percentage to those from a year ago, though of course much higher in number in each industrial category.

Live updates: Walz urges Minnesotans to apply for COVID-19 housing assistance before Monday deadline

Emily Walpole

Here are the latest updates on COVID-19 cases, deaths and hospitalizations in Minnesota and Wisconsin.

ST PAUL, Minn. — Thursday, Dec. 3

  • MDH reported 92 COVID deaths on Thursday, the second highest in a single day
  • Minnesotans have until Dec. 7 at 11:59 p.m. to request housing assistance
  • MSHSL sets tentative schedule for winter sports, depending on Gov. Tim Walz order
  • Hospital bed use down across Minnesota
  • Officials say we are at the endgame of the pandemic with upcoming vaccines
  • Experts concerned about possible surge after Thanksgiving travel, gatherings

Gov. Tim Walz and Lt. Gov. Peggy Flanagan are urging Minnesotans to draw upon state aid for their end-of-year housing bills.

In a media call at 1 p.m. Gov. Walz highlighted efforts to “ensure Minnesotans can afford to stay in their homes during the COVID-19 pandemic.”

Minnesotans can apply for housing assistance through the United Way by calling 211. The deadline is Monday, Dec. 7 at 11:59 p.m.

Walz pointed out that Minnesota is still in the heart of the pandemic, with the second-highest daily death toll of 92 announced on Thursday.

“Throughout this entire epidemic we’ve asked Minnesotans to sacrifice,” Walz said. “We’ve asked them to do things that put their own financial security somewhat at risk, to help protect others.”

The governor said he understands that some people don’t have a safe place to go, or they’re in danger of losing that safe place, when they’re asked to stay home.

“A lot of folks are in a situation where housing security is a real concern through no fault of their own,” Walz said.

Lt. Gov. Flanagan said she is a renter and paid her rent on Tuesday. But she knows that some Minnesotans are deciding between paying their rent or mortgage, and buying groceries.

“I want folks to know that there are still resources available to help you and your family,” she said.

Flanagan said home owners should ask their lenders if they can defer payment for up to a year. And anyone can apply for housing assistance via 211unitedway.org, or by calling 211, before the deadline of Monday, Dec. 7 at 11:59 p.m.

Those who don’t need assistance should consider giving to the nonprofits that are helping others, Flanagan said, and telling their friends and family about the assistance that’s available.

“We cannot stop until all Minnesotans have a safe and affordable place to live,” Flanagan said.

Emily Bastian, vice president of ending homelessness at Minnesota nonprofit Avivo, spoke about efforts to support the people living in homeless encampments in the Twin Cities.

“There is no one path from homelessness to permanent housing,” she said.

Bastian emphasized the importance of state and local governments partnering with the nonprofit sector to make that support possible.

Gov. Walz said it’s important to recognize the humanity in those experiencing homelessness, “not seeing it as a problem that we wish would just go away.”

The governor also said that the last week has given him hope that there will be a federal COVID-19 relief package.

There’s $100 million available in Minnesota’s Housing Assistance Program, which was announced in July. Minnesota Housing Commissioner Jennifer Ho said there are currently requests for $67 million in assistance as of the end of November. That means there’s a little over $30 million left to dole out, and she hopes many people will still request assistance with December rent.

“We’ve got room for one more big push here to pay December bills,” she said.

Ho said that the reason the program is closing on Dec. 7 is so that state officials have time to go through all the applications, allocate funds, and then potentially reallocate any leftover money.

COVID-19 is continuing to take a significant number of lives in Minnesota, with 92 new fatalities reported by state health officials on Thursday.

Those deaths are the second highest single-day total since the pandemic began, only behind the 101 deaths reported the Friday after Thanksgiving. The total number of lives lost in the state now sits at 3,784. Thursday’s near-record comes just one day after the third-highest daily death toll of 77.

The Minnesota Department of Health (MDH) says 6,166 new coronavirus cases were reported Thursday, based on results from 50,718 tests (45,885 PCR, 4,833 antigen) processed in private and state labs.

A positive PCR test is considered a confirmed case, while a positive antigen test is considered probable.

Minnesota now reports 333,626 COVID-19 cases since the start of the pandemic.

Hospitalizations due to the coronavirus in Minnesota are continuing a downward trend. COVID-19 patients are currently using 1,394 non-ICU beds across the state – 29 fewer than the day prior, and 376 ICU beds – nine fewer than the previous day. Metro bed availability has improved from 1.9% to 2.3%, and ICU bed availability in the metro has grown from 4.5% to 5.7%.

The total number of patients hospitalized since COVID hit Minnesota is 17,623, with 3,911 of those requiring treatment in the ICU.

COVID-19 case rates now put 86 of 87 Minnesota counties under full distance learning recommendations from MDH, although community spread is only one of many factors schools are instructed to use to determine their learning model.

Leading causes of exposure for those who have tested positive include community exposure with no known contact (62,312 cases) followed by a known contact (55,953 cases) and exposure through a congregate care setting (26,100 cases).

Young people 20 to 24 make up the largest group of cases with 35,289 and two deaths, followed by those 25 to 29 with 30,360 and four deaths. The greatest number of fatalities involves people 85 to 89, with 712 deaths among 4,244 confirmed cases.

Hennepin County has the most recorded COVID activity with 70,069 cases and 1,145 deaths, followed by Ramsey County with 29,459 cases and 521 deaths, Dakota County with 23,564 cases and 198 deaths and Anoka County with 23,541 cases and 236 fatalities.

Cook County in northeastern Minnesota has the least amount of COVID activity with 80 cases and no deaths.

On Wednesday, Governor Tim Walz, Department of Public Safety Commissioner John Harrington and several first responders spoke to Minnesotans to address the way the COVID-19 pandemic has impacted public safety and emergency response.

Walz said that he hopes to highlight aspects of everyday life that are impacted by the pandemic that many Minnesotans may not typically consider. According to Walz, the workforce of firefighters, police officers and paramedics in Minnesota has been affected by COVID-19, which can impact their ability to respond to emergencies.

Harrington emphasized that this is a statewide issue, and that he is hearing every day from fire departments and police departments that are having staffing issues due to COVID-19.

He added that fire departments have been hit particularly hard.

“Ninety-nine out of the 500 fire departments in the state of Minnesota have had major COVID outbreaks,” he said. “That’s 20%.”

He stressed that the state has worked to rearrange resources and take precautions to keep departments staffed, but it won’t take much to take those departments out of service if communities do not wear masks, avoid gatherings and social distance.

Eagan Police Chief Roger New said that his department has followed CDC guidelines since the pandemic began, but he has still seen 20% of his staff take time off due to COVID-19 quarantines at some point since March, including one staff member who was hospitalized and took two months to fully recover.

Jay Wood, a firefighter in Plato, said that the Plato Fire Department has also carefully followed guidelines, but an outbreak that affected over three quarters of the department forced them to take the department out of service for a time.

“We are not alone as a small department of dealing with the virus and the staffing issues it has presented to us,” he said. “Minnesota fire services are always here to help the public, and people always ask how they can help us. The biggest thing you can do is follow the guidelines the governor and the Department of Health have set for us.”

Paramedic Ross Chavez echoed this, urging Minnesotans to follow advice from health experts to help keep first responders in the community healthy so they can continue providing fast and effective emergency services.

“Please, help my colleagues and me be there for those who need us, especially this holiday season during these trying times,” Chavez said.

Walz said that for Minnesotans frustrated by other community members not following these guidelines, he does not want to shame anyone, but it is a “moral hazard” to not wear a mask and go to large gatherings.

“We’re not going to be able to arrest everybody, that was certainly never our intention,” he said. “You don’t have to follow these rules because I said so, you don’t have to follow them because you don’t like government. You should follow them because they’re the right thing to do, they protect lives.”

Walz added that by next Tuesday, he hopes he and state health officials will have a clear timeline for a COVID-19 vaccine in the U.S. Minnesota Department of Health (MDH) Commissioner Jan Malcolm said she expects the FDA will issue an emergency use authorization on Dec. 11, and that the first wave of vaccinations could begin as soon as a week or so later.

Walz said he understands concerns about the safety of the vaccine, but his assessment is that the federal government has done a “fantastic job” with vaccine development.

However, he stressed that although the excitement around the vaccine may make it seem as if the pandemic is over, the state is still “in the teeth of it.”

“Let’s make sure we get all of our neighbors there, and protect those folks that make a difference,” he said.

The resurgence of COVID-19 in Minnesota is proving deadly, as underscored by 77 new fatalities reported by state health officials Wednesday.

Those deaths are the second highest single-day total since the pandemic came to Minnesota, only behind the 101 deaths reported the Friday after Thanksgiving. The total number of lives lost in the state now sits at 3,692.

The Minnesota Department of Health (MDH) says 5,192 new coronavirus cases were reported Wednesday, based on results from 42,737 tests (39,912 PCR, 2,825 Antigen) processed in private and state labs.

A positive PCR test is considered a confirmed case, while a positive Antigen test is considered probable.

Minnesota now reports 327,477 COVID-19 cases since the start of the pandemic.

In a bit of positive news, hospital bed use is down after a surge in recent days. Coronavirus patients are currently using 1,350 non-ICU beds, down 104 from Tuesday, and 354 ICU beds across the state are being used for COVID patients, down 40 from a day ago.

The total number of patients hospitalized since COVID hit Minnesota is 17,378, with 3,873 of those requiring treatment in the ICU.

Leading causes of exposure for those who have tested positive include community exposure with no known contact (60,808 cases) followed by a known contact (54,554 cases) and exposure through a congregate care setting (25,695 cases).

People ages 20 to 24 make up the largest group of cases by a significant margin, with 34,806 cases and two deaths, followed by those 25 to 29 with 29,876 cases and four deaths. The greatest number of fatalities is among people 85 to 89, with 691 deaths in 4,156 confirmed cases.

Hennepin County has the most recorded COVID activity with 68,898 cases and 1,130 deaths, followed by Ramsey County with 28,948 cases and 512 deaths, Anoka County with 23,196 cases and 232 fatalities, and Dakota County with 23,102 cases and 194 deaths.

Cook County in northeastern Minnesota has the least COVID-19 activity, with 79 cases and no deaths.
