Quackles
Aug 11, 2018

Pixels of Light.



ryanrs posted:

The SX1276 can ID in morse code if you switch it to FSK modulation.


Do techie vikings use Norse Code?

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

ryanrs posted:

The unlicensed jungle of 915 MHz ISM!

Though I also have a bunch of pin-compatible radio modules for 70cm. The SX1276 can ID in morse code if you switch it to FSK modulation.

are you allowed to use encryption on that band

Ihmemies
Oct 6, 2012

I did 04 silver also without regex, by iterating over the indexes. With optimizations I could barely get it to run as fast as regexp. Seems regex FTW.
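
The iterate-over-the-indexes version is roughly this shape (a sketch of the approach, not my actual code):

code:
def count_xmas(grid):
    # scan every cell in all 8 directions for "XMAS" (04 silver)
    word = "XMAS"
    h, w = len(grid), len(grid[0])
    dirs = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]
    total = 0
    for r in range(h):
        for c in range(w):
            for dr, dc in dirs:
                if all(0 <= r + i*dr < h and 0 <= c + i*dc < w
                       and grid[r + i*dr][c + i*dc] == ch
                       for i, ch in enumerate(word)):
                    total += 1
    return total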

vanity slug
Jul 20, 2010

i wanna solve 04 gold with just regex

ExcessBLarg!
Aug 31, 2001

ryanrs posted:

Yeah, that's right: I'm flipping the AES primitives around, using AES ECB decrypt to produce ciphertext, and AES ECB encrypt to decrypt it. I think this is ok for AES, and maybe for all secure block ciphers? What's the name of this property so I can google it?

I don't know if there's a specific name for the property. In general, you're probably correct that any keyed pseudorandom permutation (PRP) that's sufficiently secure for use as a block cipher is equally secure with its inverse PRP, but that's probably more of a desired property and a consequence of the obvious means of construction, rather than an actual requirement. Personally I wouldn't rely on it unless it's a defined mode of operation.
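
Concretely, the swap is easy to sanity-check (a sketch using pycryptodome, purely for illustration):

code:
from Crypto.Cipher import AES  # pycryptodome
import os

key = os.urandom(16)
block = os.urandom(16)

# "encrypt" with the AES decrypt primitive...
ct = AES.new(key, AES.MODE_ECB).decrypt(block)
# ...then "decrypt" with the AES encrypt primitive; a block cipher is a
# keyed permutation, so the inverse round-trips exactly
assert AES.new(key, AES.MODE_ECB).encrypt(ct) == block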

The one notable example I can think of is 3DES, which is defined as three rounds of DES: encrypt (K1), decrypt (K2), and encrypt again (K3). Using the decrypt function in round 2 means that setting K1 = K2 = K3 collapses to single DES for backward compatibility, while the two-key variant (K1 = K3) still gives 112-bit keys.

The only issue with AES is that decrypt can be a slower operation than encrypt since the key schedule has to be used in reverse, and on a memory-constrained microcontroller or in a hardware implementation, that may require recomputing the entire key schedule each round. This may be the reason why your micro doesn't support hardware decryption (in addition to not being necessary for commonly-used modes these days).

ryanrs posted:

Running AES backwards suits my application because it is very decrypt-heavy (since it doesn't know who sent the packet before decrypting it, so many trial decryptions that fail).

So this seems fine, but why do you really want to use ECB anyways? It has the really obvious flaw that you're going to keep sending the same ciphertext whenever your sensor updates don't have updated payloads--unless, I guess, you simply don't send them at all.

Personally I'd just use CTR mode. If you don't like the idea of sending an obviously-sequential nonce with each packet you could use a second AES CTR instance to generate CSPR nonces using a node-internal counter. It would double the number of AES operations you have to do for encryption (in hardware!), but uses the same AES CTR decryption mode.
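
A sketch of that two-instance idea (pycryptodome; key handling, nonce size, and packet framing are all made up for illustration):

code:
from Crypto.Cipher import AES
import os

k_nonce = os.urandom(16)  # key for the nonce-generating instance
k_data = os.urandom(16)   # key for the payload

def encrypt_update(counter, payload):
    # encrypting the bare node-internal counter under the second key is
    # exactly what a CTR keystream does, so this stays hardware-friendly
    nonce = AES.new(k_nonce, AES.MODE_ECB).encrypt(
        counter.to_bytes(16, 'big'))[:8]
    ct = AES.new(k_data, AES.MODE_CTR, nonce=nonce).encrypt(payload)
    return nonce + ct  # receiver reads the nonce straight out of the packet

def decrypt_update(packet):
    nonce, ct = packet[:8], packet[8:]
    return AES.new(k_data, AES.MODE_CTR, nonce=nonce).decrypt(ct)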

ExcessBLarg! fucked around with this message at 15:23 on Dec 4, 2024

Ihmemies
Oct 6, 2012

vanity slug posted:

i wanna solve 04 gold with just regex

How do you do 2D regex? I considered creating substrings of 9 from each index and joining them with space, but that would take 9x more memory. Oh no..
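
One way that avoids the 9x copies: flatten the grid with newlines, let . match across rows, and each cross becomes a fixed-stride pattern (a sketch, assuming the usual X-MAS shape for 04 gold):

code:
import re

def count_x_mas(grid_text):
    # grid_text: rows joined by '\n'; with row width w, the A of the X
    # sits w+2 characters after the top-left corner
    w = grid_text.index('\n')
    gap = '.{%d}' % (w - 1)
    corners = [('M', 'S', 'M', 'S'), ('M', 'M', 'S', 'S'),
               ('S', 'M', 'S', 'M'), ('S', 'S', 'M', 'M')]  # (TL,TR,BL,BR)
    alts = ['%s.%s%sA%s%s.%s' % (tl, tr, gap, gap, bl, br)
            for tl, tr, bl, br in corners]
    # zero-width lookahead so overlapping crosses all get counted
    return len(re.findall('(?=(?:%s))' % '|'.join(alts), grid_text, re.DOTALL))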

ryanrs
Jul 12, 2011

ExcessBLarg! posted:

So this seems fine, but why do you really want to use ECB anyways? It has the really obvious flaw that you're going to keep sending the same ciphertext whenever your sensor updates don't have updated payloads--unless, I guess, you simply don't send them at all.

Personally I'd just use CTR mode. If you don't like the idea of sending an obviously-sequential nonce with each packet you could use a second AES CTR instance to generate CSPR nonces using a node-internal counter. It would double the number of AES operations you have to do for encryption (in hardware!), but uses the same AES CTR decryption mode.

My code isn't straight ECB, but it uses the ECB primitive. I think I'm using ECB in a broadly similar way to how CTR uses it. Specifically, my 16-byte plaintext blocks never repeat. Each block is guaranteed unique (for the duration of the session key).

I'm trying to keep these update packets as small as possible, and independently decrypt-able (since there is ~10% packet loss). I think 16 bytes is as small as I can go? If I go smaller, besides issues with block size, there are big problems distinguishing failed decryptions or even just noise. I don't need the certainty of a 16-byte HMAC, but a 4 byte magic number is probably not enough.

I can live with 1 byte of sensor data, although my current code sends 4 bytes. The other 12 bytes have other header fields, including a 32-bit sequence number. I create a new AES session key every boot (and occasionally during runtime), so the sequence numbers will not repeat with the same key.

The benefits of this weirdo mode disappear once you are sending more than 1 or 2 blocks per packet. At that point just use CTR or some other standard mode.
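
The whole weirdo mode is only a few lines (a sketch: pycryptodome for the AES, and the field layout is illustrative):

code:
import os
import struct
from Crypto.Cipher import AES  # pycryptodome

session_key = os.urandom(16)  # fresh key every boot

def make_packet(uptime, seq, batt_mv, temp_centi, sensor):
    # one unique 16-byte block; the 12 header bytes double as verification bits
    block = struct.pack('<IIHHI', uptime, seq, batt_mv, temp_centi, sensor)
    # 'backwards' AES: the decrypt primitive produces the ciphertext, so the
    # decrypt-heavy receive side only ever needs the encrypt primitive
    return AES.new(session_key, AES.MODE_ECB).decrypt(block)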

ryanrs
Jul 12, 2011

Captain Foo posted:

are you allowed to use encryption on that band

It would be rude to poo poo up 70cm, so I'd probably do my business on 460 MHz.

redleader
Aug 18, 2005

Engage according to operational parameters

Captain Foo posted:

are you allowed to use encryption on that band

encryption? why no, officer, this is just powerful narrowband noise

root of all eval
Dec 28, 2002

All this unlicensed radio band chat is making me wanna watch Pump Up the Volume

ryanrs
Jul 12, 2011

ryanrs posted:

Example update packet, which should be short:
code:
// 16 bytes
struct Update {
	uint32_t uptime;       // in seconds
	uint32_t sequence;     // simple counter
	uint16_t battery;      // millivolts
	uint16_t temperature;  // 0.01 C
	uint32_t sensor_data;  // opaque
};
Encrypt with AES ECB and send it.

On the receive side, decrypt it and check whether the fields make sense compared to the last successfully decoded packet. Is the uptime correct, +/- a second? Is the sequence number greater than the last, but not too big a jump? Etc.

Conceptually, each field contributes some bits towards information, and some bits for verification. For example, since you know the node's uptime from previous packets, that field might contribute 2 bits of real info (the +/- 1 sec uncertainty), and the other 30 bits are verification. Most random or malicious changes will mangle the uptime field so much that the packet will fail verification.

I estimate the various fields to encode this much real information:
1 bit uptime
3 bits sequence
4 bits battery voltage
2 bits temperature

I will set the acceptance thresholds wider than that, esp for the battery voltage and temperature fields. But still that gives you more than 48 bits of 'verification' from just the uptime and sequence number fields. This seems totally fine for my application. I don't think I need the security of a 16-byte HMAC.

Notable caveat: the central receiver will have to attempt a decode with each node's AES key, because it doesn't know who sent the packet (no source address). This is fine for the small number of end nodes in my system.

Does this simple system have any fatal flaws? Can it be improved without adding dozens of bytes to the payload? Doubling the packet size comes with a 30% hit to battery life.


To elaborate on the 'verification bits' idea, below is my packet acceptance code. It rejects any packet with implausible values.

code:
        -- (body of the packet receive loop; uptime, seq, batt, temp, sensor
        --  were just decrypted out of the 16-byte block)
        -- accept 7 out of 2^32
        local boot_diff = (time.uptime() - uptime) - S.BootTime
        if math.abs(boot_diff) > 3 then
            print("  boot_diff=" .. boot_diff .. " REJECT")
            goto continue
        end

        -- accept 101 out of 2^32
        local seq_diff = seq - S.Sequence
        if seq_diff == 0 then
            print("  seq_diff=0 REJECT REPLAY")
            goto continue
        elseif seq_diff < 0 or seq_diff > 100 then
            print("  seq_diff=" .. seq_diff .. " REJECT")
            goto continue
        end

        -- accept 6001 out of 2^16
        batt = decode_batt(batt)
        if batt > 6.0 then
            print("  batt=" .. batt .. " REJECT")
            goto continue
        end

        -- accept 30001 out of 2^16
        temp = decode_temp(temp)
        if temp < -100 or temp > 200 then
            print("  temp=" .. temp .. " REJECT")
            goto continue
        end

        print("ACCEPT")
        print_hex(sensor)
        break

        ::continue::
code:
>>> import math
>>> (7/2**32) * (101/2**32) * (6001/65536) * (30001/65536)
1.606568279102565e-18
>>> math.log(1.606568279102565e-18)/math.log(2)
-59.110723411365264
If we assume that any change in the ciphertext will totally change the plaintext, then there is a 1 in 2^59 chance that my code will mistakenly accept a forged or corrupt packet as authentic. That's nowhere near the certainty of a real 16-byte HMAC, but 1 in 622 quadrillion is sufficient for my hobby needs.

The question is, can an adversary manipulate a ciphertext to preferentially affect certain bits in the plaintext? For example, given a valid encrypted packet copied off the airwaves, can an attacker modify the ciphertext in such a way to affect only the last 4 bytes in a block, and not the first 12? Since my algorithm only verifies certain bits, not a full hash of the message, such an attack could target my information bits and avoid the verification bits. Mess up the sensor value without spoiling the sequence number.

I know good block ciphers try to maximize the avalanche effect. And I think if the above approach was practical, it would also break AES CTR, since CTR usually only flips a bit or two of the AES input when the counter increments.

Note that CTR mode does not create this error avalanche, at all, because it's an XOR. You absolutely need a separate HMAC with CTR mode, since an adversary can trivially flip a plaintext bit just by flipping the corresponding ciphertext bit. It's a 1-to-1 mapping, with no scrambling at all. My code really, really needs that scrambling.
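
That malleability is a five-line demo (sketch, pycryptodome): flip one ciphertext bit under CTR and exactly that plaintext bit flips, nothing else.

code:
from Crypto.Cipher import AES
import os

key, nonce, pt = os.urandom(16), os.urandom(8), bytes(16)
ct = bytearray(AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(pt))
ct[-1] ^= 0x01  # attacker flips a single ciphertext bit

out = AES.new(key, AES.MODE_CTR, nonce=nonce).decrypt(bytes(ct))
assert out[:15] == pt[:15] and out[15] == 0x01  # same bit flipped, no avalanche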

Quackles
Aug 11, 2018

Pixels of Light.



What's your threat model again?

Soricidus
Oct 20, 2010
freedom-hating statist shill

ryanrs posted:

The question is, can an adversary manipulate a ciphertext to preferentially affect certain bits in the plaintext?

the answer to this question in isolation is “no”. there is no way to do this unless you have the key.

your system as described is adequately secure in that no attacker, however sophisticated and however much they love math, is going to start by breaking the encryption.

ExcessBLarg!
Aug 31, 2001

ryanrs posted:

The question is, can an adversary manipulate a ciphertext to preferentially affect certain bits in the plaintext?

This is one of those situations where someone probably took the Rijndael/AES algorithm, reduced the number of rounds, and wrote a paper about the extent to which they were able to carry out an attack like this. I'll leave it as an exercise to the reader to find said paper.

But really, the conclusion over the past two decades is that if your threat model includes potential adversarial manipulation of the ciphertext, you need AE/a MAC, and (as you mentioned) having such in place is what makes CTR mode/XOR stream ciphers acceptable.

So why can't you do a MAC again? It doubles the size of your update packets? Lack of compute?

ryanrs posted:

I know good block ciphers try to maximize the avalanche effect. And I think if the above approach was practical, it would also break AES CTR, since CTR usually only flips a bit or two of the AES input when the counter increments.

So the thing about AES CTR is that in a model where an attacker can't manipulate the ciphertext, but you might still see bit errors from the underlying media, only the affected bits of the ciphertext corrupt the plaintext, which may be a desirable property with things like encrypted AV streams where such losses are tolerable.

champagne posting
Apr 5, 2006

YOU ARE A BRAIN
IN A BUNKER


Quackles posted:

What's your threat model again?

Joe in accounting who's 60 and Matt the 19 year old intern in marketing

ryanrs
Jul 12, 2011

What ciphers are used commercially for very short packets, like 10 or 20 bytes including overhead?

I knew about KeeLoq, the (bad) cipher used in automotive keyless entry systems. Now that I'm reading through the details, I see many eerie similarities to my scheme.

Microchip: Introduction to Ultimate KEELOQ Technology

What other publicly-described systems can I read about?

ryanrs
Jul 12, 2011

General-purpose RPC server running on the sensor nodes, lol.

code:
-- listen forever: anything that decrypts cleanly is executed as Lua
-- and the result is sent back ('eval' = Lua's load(), more or less)
while true do
    local request = decrypt(radio.receive())
    if request ~= nil then
        local reply = eval(request)
        radio.transmit(encrypt(reply))
    end
end
Running the radio with interactive latency will cut the battery life by a lot. But that's still weeks of runtime, and very convenient for development.

I'll probably have the nodes check for activity every few minutes, and if they see a paging signal, go into interactive mode, with a multi-hour inactivity timeout.

Power usage is 2 mA @ 3.7V, running a 64 MHz Cortex-M4.
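
For scale, the runtime arithmetic (cell capacity is an assumption, not from the post):

code:
>>> 2000 / 2         # mAh / mA = hours, assuming a 2000 mAh cell
1000.0
>>> 1000 / 24 / 7    # weeks
5.952380952380952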

Sapozhnik
Jan 2, 2005

Nap Ghost
TDMA is the usual MAC for these sorts of simple star topology sensor applications isn't it?

ryanrs
Jul 12, 2011

Because of my power budget and small number of nodes, there's not much point in thinking about contention. Just pretend the channel is always free.

Sapozhnik
Jan 2, 2005

Nap Ghost
Yeah, that's the point. Sync to the base station beacon packet and you can crank down your node radio's duty cycle as low as you want. Wake up just long enough to receive the scheduled beacon and then wake up again for your TDMA slot if the beacon says you have a message waiting (or you have a message to transmit).

It's easier if your radio has a dedicated enable pin that you can hook up to a timer on the MCU but it's still possible without.

ryanrs
Jul 12, 2011

I'm not sure what the beacon gets me. Instead of sending a beacon packet, just send the packet.

Quackles
Aug 11, 2018

Pixels of Light.



you've weighed the beapros and beacons

ryanrs
Jul 12, 2011

There is a state machine that controls the radio (written in Lua). It can tune the radio, check for a carrier, receive packets, and transmit packets. If you want, you can monitor several frequencies, and if you're fast about it, catch a packet being sent on any one.

You just need to configure the transmitter to send a giant preamble that is longer than the time it takes for your receiver to go through its entire channel list. Kinda gross if you get carried away with it, though.
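
The preamble sizing is just the scan-loop arithmetic (numbers illustrative):

code:
# preamble must outlast one full pass over the receiver's channel list
CHANNELS = 4      # frequencies being monitored (illustrative)
SNIFF_MS = 10     # per-channel carrier check (illustrative)
RETUNE_MS = 1     # radio re-tune time between channels (illustrative)

preamble_ms = CHANNELS * (SNIFF_MS + RETUNE_MS)
print(preamble_ms, "ms minimum preamble")  # 44 ms minimum preamble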

root of all eval
Dec 28, 2002

Quackles posted:

you've weighed the beapros and beacons

Sapozhnik
Jan 2, 2005

Nap Ghost
Maybe I'm misunderstanding your use case then. I'm assuming your base station has "sufficient" power available and can keep its radio on constantly, your nodes are battery limited, and every node can directly talk to every other node (or at the very least every node can talk directly to the base station).

Back when I did wireless sensor projects, the radios on the nodes would consume significant power if they were constantly receiving, so the idea is to run the receiver on a low duty cycle. Transmit a beacon packet at regular intervals, say 10ms, then once a node receives its first beacon it synchronizes a timer to that beacon and switches on its radio just long enough to receive the beacon at the expected time. Then each node has its own TX and RX time slots allocated in the time interval (frame) between beacons, so you don't get collisions. If you have something to report to the base station then you schedule the radio to transmit during your slot. If the base station has something to say to you then it will announce that in the beacon's contents, so you'll switch on your receiver for the duration of your receive slot as well.

That way your radio spends most of its time asleep and therefore reduces its power consumption, while still exhibiting reasonably low latency if your frames are short enough.
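
The node-side schedule math is tiny (a sketch of the frame timing described above; numbers illustrative):

code:
BEACON_INTERVAL_MS = 10  # frame length (illustrative)
SLOT_MS = 1              # per-node slot width (illustrative)

def wakeups(last_beacon_ms, my_slot, frames=3):
    # once synced to a beacon, the node knows exactly when to power the
    # radio: at each expected beacon, then at its own TDMA slot
    out = []
    for f in range(1, frames + 1):
        frame = last_beacon_ms + f * BEACON_INTERVAL_MS
        out.append((frame, frame + SLOT_MS * (1 + my_slot)))
    return out

print(wakeups(0, my_slot=2))  # [(10, 13), (20, 23), (30, 33)]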

Now, this was just a master's project, but it's not a particularly complicated MAC; I'm sure real products aren't that much more complicated.

ryanrs
Jul 12, 2011

I'm doing a star network, where the nodes only talk to the base.

Rough numbers:
10ms to check for channel activity (DSP math to detect below the noise floor)
100ms to receive a minimal 16-byte packet
1,000ms for full size 200 byte payload + 40 byte crypto
244 bytes/sec raw radio rate

...and I just realized I didn't mention that the nodes each listen on a different frequency. Beacons would make a lot of sense if they were all on the same channel.

For the uplink, all nodes transmit on the same freq. The base runs its radio at high duty cycle, so nodes transmit with minimum preamble. If I had high channel utilization, collisions would be a problem. But my channel utilization should be <<1%.

ryanrs
Jul 12, 2011

Hmm, yeah if I had 100+ nodes, the current design would suck, esp with thundering herd issues on the uplink. And more broadly, I am not getting good performance in terms of samples/sec/mW across the network.

OTOH, 100 devices would require a hardware re-spin and contract manufacturing, at minimum. And a bigger microcontroller for the base, because Lua couldn't track a hundred nodes with the current RAM. So it's not as if a smarter protocol would actually let me deploy 100 nodes. (I don't want to manually recharge a hundred nodes, either.)

ryanrs
Jul 12, 2011

sheesh, I'm really leaning on "it's ok if it's bad and dumb because it's a hobby project" :(

Carthag Tuek
Oct 15, 2005

always pamo when you walk
roads you ought to know
survived the childhood years
right up to life's end

ryanrs posted:

sheesh, I'm really leaning on "it's ok if it's bad and dumb because it's a hobby project" :(

well its true


we are gonna get some contractors to write us a new frontend instead of the 9yo angularjs hunk of coal, so ive been documenting the API & lol, its basically already a hack of an earlier system so im making real sure to write "do not use any fields in these json objects unless they are documented in this document" so it doesnt calcify further

ryanrs
Jul 12, 2011

I'm also seeing the limits of Lua. You can't shove nearly as much functionality in 256k of RAM with Lua as you can with C/C++. Maybe 1/10th? That's obviously a very hand-wavey number, but a 256k microcontroller is a big chip, yet it feels pretty cramped.

But hacking together interactive UI stuff and state machines with Lua coroutines is so much easier and more fun than writing a huge C event loop. It's a good tradeoff for my hobby projects, but for a commercial product, that performance/RAM/functionality/$ tradeoff is bad. It is better to have your devs spend more time beating themselves in C so you can ship a cheaper product with a smaller micro.

The thing that would swing the economics in favor of Lua, even for some commercial projects, is if microcontroller RAM was dominated by something else, something so big that Lua's memory inefficiency didn't matter. For example, graphics. Smartwatches that are too small for Linux, but have graphics, will need more RAM. Edge AI (if it's not bullshit) might also drive larger microcontroller RAM.

ryanrs
Jul 12, 2011

What I am getting out of this Lua framework is moving all my dumb little projects onto a properly managed, shared codebase.

1. Nordic toy project (USB I/O device)
2. RP2040 toy project (4-key macropad)
3. Nordic complex project (this sensor network)
4. Nordic complex project (bear radar device)
...
7. LED sign project, not Lua.

The LED sign needs to be integrated into the MQTT network somehow, though the Teensy 3.2 microcontroller is too small to run Lua in addition to LED Shader Language (created for this one project as a joke, but turned out to be fun).


So that's the goal: all my dumb projects, with up-to-date code, alive and running, even if it's just displaying the time or announcing my mail.

ryanrs
Jul 12, 2011

kinda just realizing now how many of my projects are ever more elaborate ways of blinking leds

Sapozhnik
Jan 2, 2005

Nap Ghost

ryanrs posted:

But hacking together interactive UI stuff and state machines with Lua coroutines is so much easier and more fun than writing a huge C event loop

least unhinged rust fanboy rn:

gonadic io
Feb 16, 2011

>>=
that's literally lua's whole deal, is when C people were like "gui in C sucks so loving bad im making a new language about it"

ryanrs
Jul 12, 2011

I don't know enough about Rust to get it, sorry.

But do tell me about Rust. As a compiled language, it probably fits onto a microcontroller better than Lua, once you start writing a lot of code?

Lua is amazing for interfacing with terrible Arduino C++ code. How is Rust in this regard? I've heard Rust is designed for systems programming, but also that it was pedantic and bitchy about...borrowing things? I dunno!

ryanrs
Jul 12, 2011

gonadic io posted:

that's literally lua's whole deal, is when C people were like "gui in C sucks so loving bad im making a new language about it"

I also considered python, tcl, and javascript.

akadajet
Sep 14, 2003

ryanrs posted:

I don't know enough about Rust to get it, sorry.

But do tell me about Rust. As a compiled language, it probably fits onto a microcontroller better than Lua, once you start writing a lot of code?

Lua is amazing for interfacing with terrible Arduino C++ code. How is Rust in this regard? I've heard Rust is designed for systems programming, but also that it was pedantic and bitchy about...borrowing things? I dunno!

it’s not my job to educate you about rust

The Fool
Oct 16, 2003


missed scrivener opportunity imo

ryanrs
Jul 12, 2011

Another nice thing about Lua is that it's super easy to embed in a C++ program, even if you don't know any Lua at all. I know that sounds like an exaggeration, but it's not.

In fact, I still barely know Lua. I have to look up how to write a for loop when I need one. I just learned about the obj:func() syntax yesterday. Somehow it doesn't matter, and I find writing Lua is faster for me than C++.

gonadic io
Feb 16, 2011

>>=
it's literally the whole point of the language. it was created, designed, and implemented specifically to be embedded into c++ programs.
