gonadic io
Feb 16, 2011

>>=
How simple? If there are no (and can never be any) fields in any of the variants, then that's a job for the ol' integer.

gonadic io
Feb 16, 2011

>>=
Otherwise, if you don't want to go via string parsing as above, a more complex approach could be an integer discriminant and a great number of optional columns, but at least the unsafe code only exists in one well-defined place.
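
Something like this shape, with totally made-up names, just to sketch it (untested):

Rust code:
enum Payment {
    Cash,
    Card { last_four: String },
}

// What actually goes into the row: (discriminant, card_last_four)
fn to_columns(p: &Payment) -> (i32, Option<String>) {
    match p {
        Payment::Cash => (0, None),
        Payment::Card { last_four } => (1, Some(last_four.clone())),
    }
}

// The one well-defined place where the dodgy conversion lives.
fn from_columns(discriminant: i32, card_last_four: Option<String>) -> Option<Payment> {
    match (discriminant, card_last_four) {
        (0, None) => Some(Payment::Cash),
        (1, Some(last_four)) => Some(Payment::Card { last_four }),
        _ => None,
    }
}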

lifg
Dec 4, 2000
<this tag left blank>
Muldoon

gonadic io posted:

How simple? If there are no (and can never be any) fields in any of the variants, then that's a job for the ol' integer.

Exactly that simple.

I haven’t used enum discriminants.

How do I convert from an integer back to the enum? Google is telling me I need to write a try_from function. Is that correct?
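
From what I can tell it'd be something like this (made-up enum, haven't actually tried it):

Rust code:
use std::convert::TryFrom;

enum Status {
    Active = 0,
    Inactive = 1,
}

impl TryFrom<i32> for Status {
    type Error = ();

    fn try_from(value: i32) -> Result<Self, Self::Error> {
        match value {
            0 => Ok(Status::Active),
            1 => Ok(Status::Inactive),
            // anything that doesn't match a variant is an error
            _ => Err(()),
        }
    }
}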

crazypenguin
Mar 9, 2005
nothing witty here, move along
Databases have enum column types as well.

gonadic io
Feb 16, 2011

>>=

lifg posted:

Exactly that simple.

I haven’t used enum discriminants.

How do I convert from an integer back to the enum? Google is telling me I need to write a try_from function. Is that correct?

I think I've used this library before so I don't have to write out the cases myself: https://docs.rs/num-derive/latest/num_derive/derive.ToPrimitive.html
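
Untested, but IIRC the usage is roughly this (needs both num-derive and num-traits as dependencies; the enum is made up):

Rust code:
use num_derive::{FromPrimitive, ToPrimitive};

#[derive(Debug, FromPrimitive, ToPrimitive)]
enum Status {
    Active = 0,
    Inactive = 1,
}

fn main() {
    // enum -> integer
    let n = num_traits::ToPrimitive::to_i32(&Status::Inactive); // Some(1)
    // integer -> enum; values that don't match a variant give None
    let s: Option<Status> = num_traits::FromPrimitive::from_i32(1);
    println!("{:?} {:?}", n, s);
}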

M31
Jun 12, 2012
I've been trying to write some Rust in my free time, and I guess I have to accept that I don't understand anything:

code:
struct Container {
    cache: std::collections::HashMap<u32, String>,
}

impl Container {
    pub fn get(&mut self) -> &str {
        let id = 42;
        let cache = &mut self.cache;
        if let Some(value) = cache.get(&id) {
            return value;
        }
        cache.insert(id, "foo".to_owned());
        return cache.get(&id).unwrap();
    }
}
This code has the borrow checker yelling at me that I'm making both an immutable and a mutable borrow, but I don't understand why. Where am I going wrong? (I found I can use cache.entry().or_insert(), which is very nice, but I would like to understand why this method doesn't work.)

crazypenguin
Mar 9, 2005
nothing witty here, move along
The best place to start is with the compiler error message, and explain what you don't understand or why you think it shouldn't be complaining about that.

gonadic io
Feb 16, 2011

>>=
I feel like that should probably work if you just used self.cache everywhere instead of giving it a name
E: nope, I just tried and it doesn't, robostac is right

gonadic io fucked around with this message at 16:58 on Jul 4, 2023

robostac
Sep 23, 2009
The borrow checker isn't smart enough to let you both return references to the contents of a structure and modify that structure from the same function. On current versions this needs to be implemented with Entry. Your code does work correctly with the next-generation borrow checker (Polonius) that's available on nightly, but that's been in development for a while now and is still a work in progress.

Threep
Apr 1, 2006

It's kind of a long story.
As far as I can tell you're running into the common issue where if let has a funky scope and even though you're returning from inside it, its borrow lasts until the outer scope ends.

If that's the case one of the workarounds is to put the whole if let inside its own scope to force the borrow to be released.

One of my least favourite Rust constructs is the suggested workaround if your if let returns an owned/Copy item:

code:
if let Some(value) = { let value = map.get(key); value } {
    return value;
}

Yes that temporary binding is necessary.

Threep fucked around with this message at 17:08 on Jul 4, 2023

gonadic io
Feb 16, 2011

>>=
I've been playing around in the playground and I can't get either of those to work here.

Love Stole the Day
Nov 4, 2012
Please give me free quality professional advice so I can be a baby about it and insult you
Here, this might be better for what you're trying to do:

code:
struct Container {
    cache: std::collections::HashMap<u32, String>,
}

impl Container {
    pub fn get(&mut self) -> &str {
        let id = 42;
        let default = "foo";
        let mut output = "";

        if self.cache.contains_key(&id) {
            output = self.cache.get(&id).unwrap();
        } else {
            self.cache.insert(id, String::from(default));
            output = default;
        }

        return output;
    }
}

crazypenguin
Mar 9, 2005
nothing witty here, move along
Fun Rust fact: it'll figure out initialization of a variable from a subsequent statement, even across the two branches of an `if`, so no dummy initializer is required (nor `mut`!!).

Rust code:
        let output : &str;

        if self.cache.contains_key(&id) {
            output = self.cache.get(&id).unwrap();
        } else {
            self.cache.insert(id, String::from(default));
            output = default;
        }

        return output;
But of course, we could just move the initialization expression to the RHS of the `let` as well. But then we're just directly returning a let-bound variable, so we could just... have that as the final expression.

Rust code:
    pub fn get(&mut self) -> &str {
        let id = 42;
        let default = "foo";
        if self.cache.contains_key(&id) {
            self.cache.get(&id).unwrap()
        } else {
            self.cache.insert(id, String::from(default));
            default
        }
    }
And then of course, we can use `entry`, but the original question was already aware of that approach. But I guess for completeness for anyone casually curious about Rust and reading the thread:

Rust code:
self.cache.entry(id).or_insert_with(|| default.to_owned())

kujeger
Feb 19, 2004

OH YES HA HA

lifg posted:

Exactly that simple.

I haven’t used enum discriminants.

How do I convert from an integer back to the enum? Google is telling me I need to write a try_from function. Is that correct?

There's also this if you already use serde for (de)serializing stuff
https://github.com/dtolnay/serde-repr
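
IIRC the usage is something like this (untested, made-up enum, serde_json just for the demo):

Rust code:
use serde_repr::{Deserialize_repr, Serialize_repr};

#[derive(Debug, Serialize_repr, Deserialize_repr)]
#[repr(u8)]
enum Status {
    Active = 0,
    Inactive = 1,
}

fn main() -> Result<(), serde_json::Error> {
    // serializes as the integer rather than the variant name
    assert_eq!(serde_json::to_string(&Status::Inactive)?, "1");
    // and back again; an out-of-range value becomes a deserialization error
    let s: Status = serde_json::from_str("1")?;
    println!("{:?}", s);
    Ok(())
}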

M31
Jun 12, 2012
Thanks everybody! I think I get it now, the first borrow is living longer than I expected due to a borrow checker limitation.

Love Stole the Day
Nov 4, 2012
Please give me free quality professional advice so I can be a baby about it and insult you
Today I learned you can yeet in Rust:



https://www.youtube.com/watch?v=BgCXrf_SG2E&t=110s

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

I posted this in yospos but realized this might be a better place for it --

I've been building part of my PhD work in Rust in order to learn the language in anger, and I hit a milestone last night: 10,000 lines of code! In a Gladwellian sense, I'm now a Rust expert :v:

Except that of course I do not feel like a Rust expert in the way that, say, 10,000 lines of Java might make me legitimately proficient in that language. (here is a representative Rust file, as an example).

For instance, I'm making heavy use of cloning to avoid the borrow checker yelling at me, even though I'm _reasonably_ sure I could imagine using lifetimes (the project is a compiler, and most of the heavy lifting is tree-walking a program AST, so all the nodes in it will have the same lifetime). Also, there are plenty of data structures that require careful consideration in Rust that I could "just write" in a GCed language (or something where I explicitly control memory) - the above file contains, morally, a union-find data structure for a typechecking algorithm, but it got mutated into this weird "things get passed around as a `uint` which internally gets looked up in a lookup table of actual values" shape, which is insane to me, but damned if I could figure out how else to get it to work! Lots of little things, too - what's the right way to expose public things outside a module? Is it fine to just dump stuff in a mod.rs (kind of like an ML .mli file) or should that only be for use/pub mod directives? If I have a case where a function will either return an existing piece of data (which it could do by reference) in one code path or construct a new piece of data in another (which cannot be by reference), can I somehow do something smarter than cloning in the former case?
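
Concretely, the shape I mean is something like this (names made up, not my actual code):

Rust code:
// Nodes never hold references to each other; they hold indices into one Vec
// that owns everything, so every node trivially has the same lifetime.
struct Arena {
    nodes: Vec<Node>,
}

// A plain integer standing in for "a reference to a node".
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct NodeId(usize);

enum Node {
    Leaf(String),
    Pair(NodeId, NodeId),
}

impl Arena {
    fn alloc(&mut self, node: Node) -> NodeId {
        self.nodes.push(node);
        NodeId(self.nodes.len() - 1)
    }

    fn get(&self, id: NodeId) -> &Node {
        &self.nodes[id.0]
    }
}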

The problem is that I'm working on it on my own and I don't have a good barometer for Good Rust Idioms, beyond "as a C programmer, do I imagine that doing it this way would bite me later on" or "as a functional programmer, can I express the thing that I'd just write in OCaml without having to worry about fn vs Fn vs FnOnce vs FnMut", and there's nobody around to yell at me in my pull requests. That whole Ira Glass quote about having enough taste to know that you don't have good taste, etc.

I've thumbed through a few Rust books and the problem is that they tend to be geared towards either newbies to the language ("this is what an `if let` is, this is how pattern matching works", etc.) or total newbies to programming, inexplicably ("this is a for loop"). What's a good intermediary Rust resource for learning good taste? I know Jon Gjengset was at some point working on a book that sounded like this but I guess he's fallen into the AWS employee black hole or somethin'.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Jon did his book, Rust for Rustaceans. It is probably what you want. Also Aria’s linked-list book and maybe even her blog posts.

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

Subjunctive posted:

Jon did his book, Rust for Rustaceans. It is probably what you want.
oh what the heck, how was I unable to find that it was actually published!

Thanks pal, this is getting put at the front of the queue

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I am late to the party, but I saw a lot of talk of stuff being "managers." That's always been a cringe sign for me that I don't know where my abstractions really are. I can't think of a better term but I know there's something in common, so it becomes a manager.

I still suck at the traits stuff and how abstraction is done in Rust, so I don't have any better thing to say about that. Even if I did, the situations are all contextual and independent of each other. I just wanted to flag a potential code smell.

Love Stole the Day
Nov 4, 2012
Please give me free quality professional advice so I can be a baby about it and insult you

Rocko Bonaparte posted:

I am late to the party, but I saw a lot of talk of stuff being "managers." That's always been a cringe sign for me that I don't know where my abstractions really are. I can't think of a better term but I know there's something in common, so it becomes a manager.

I still suck at the traits stuff and how abstraction is done in Rust, so I don't have any better thing to say about that. Even if I did, the situations are all contextual and independent of each other. I just wanted to flag a potential code smell.

Agreed! Naming things is hard

giogadi
Oct 27, 2009

Writing hella code and figuring out what abstractions/features/patterns you like and don’t like yourself is just about the best way to learn IMO. With that kind of experience you can read stuff and actually say “oh I’ve run into that problem” and more accurately judge and contextualize any advice you get from a book.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I am linting some of my first real code with Clippy and have some goofy stuff going on. Here's one paraphrased:

code:
                let mut builder = String::new();
                let bytes = CString::as_bytes_with_nul(&converted);

...

                for i in 0..use_size {
                    // Not sure why I have to use clone() here to appease clippy. I don't think I
                    // am borrowing the value.
                    builder.push(bytes[i].clone() as char);
                }
It was claiming I can't move bytes[i].

Edit: I guess it wasn't Clippy but some linting in CLion that decided to be goofy about it. Clippy actually hated cloning the value.
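
For reference, the clone-free version Clippy seems to want is just this (same paraphrased snippet as above, untested):

Rust code:
let bytes = CString::as_bytes_with_nul(&converted);

// u8 is Copy, so there's nothing to move or clone -- indexing just copies the byte.
let mut builder = String::new();
for &b in &bytes[..use_size] {
    builder.push(b as char);
}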

Rocko Bonaparte fucked around with this message at 22:46 on Aug 16, 2023

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I am looking for stuff--preferably text--on how to use the abstractions in Rust (generics, traits, whatever) in particular to make code easier to unit test with mocks. I'd like to swap out system code for fake stuff so I can verify all the intermediate handling logic I have to write.
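
The kind of thing I'm picturing is roughly this (made-up example), I just don't know if it's how Rust people actually structure it:

Rust code:
// The handling logic depends on a trait; prod code passes the real
// implementation, tests pass a canned fake.
trait Clock {
    fn now_secs(&self) -> u64;
}

struct SystemClock;

impl Clock for SystemClock {
    fn now_secs(&self) -> u64 {
        std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_secs()
    }
}

fn is_expired(clock: &dyn Clock, expires_at: u64) -> bool {
    clock.now_secs() >= expires_at
}

#[cfg(test)]
mod tests {
    use super::*;

    struct FakeClock(u64);

    impl Clock for FakeClock {
        fn now_secs(&self) -> u64 {
            self.0
        }
    }

    #[test]
    fn expired_when_past_deadline() {
        assert!(is_expired(&FakeClock(100), 50));
        assert!(!is_expired(&FakeClock(10), 50));
    }
}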

Jo
Jan 24, 2005

:allears:
Soiled Meat
I'm kicking myself a bit for rewriting a big chunk of this and working myself into a corner.

I'm processing a stream of video data at ~120Hz. I have a preallocated ring buffer with space for N image captures. The buffers are also preallocated, so the camera thread is just going through, picking the next element in the ring buffer, and copying contents to that. Works pretty okay.

Or it would, if I didn't have to hand out a reference to elements in the ring buffer.

code:
	pub fn read_next_frame_blocking(&mut self) -> &image::RgbImage {
		// Important safety tip: WE DO NOT RELEASE THE REFERENCE TO THE PREVIOUS RGBIMAGE UNTIL THIS READ FRAME IS CALLED AGAIN.
		let rh = (self.ring_buffer_read_position.load(Ordering::Relaxed) + 1) % self.ring_buffer_size;
		self.ring_buffer_read_position.store(rh, Ordering::Relaxed);
		loop {
			if self.ring_buffer_write_position.load(Ordering::Relaxed) == rh {
				// Buffer is empty.
				thread::sleep(Duration::default());
			} else {
				break;
			}
		}
		let ring_buffer_read_lock = self.ring_buffer.read().expect("Unable to get read lock on ring buffer.");
		let image_ref = ring_buffer_read_lock.get(rh).expect("Failed to unwrap image at read head. This can't happen.");
		image_ref
	}
Before this I was just dumping stuff to a channel, but that involved an absurd number of allocations and there was great sadness.

I have four ideas for solutions but none of them seem very elegant:

1) Go back to the approach that used channels, but instead of just tx/rx have a tx/rx for sending images AND a tx/rx for 'returning' images which can get reused. Pretty elegant in theory and the lack of locking probably will be faster. Problem: having to return images after being done processing feels too much like manual memory management.
2) Pass a lambda into the read_next_frame_blocking. I rather like this, but I think using closures to do stuff could lead to more hardship in the future.
3) Have another fake image that I swap into the Vec in place of the actual image, then swap it back at return time. This feels risky and dumb.
4) Go back to the channel approach. Bummer to throw away the work and bummer to have the allocations, but it's maybe the nicest going forward.

I'm sure there's a better approach to all of this but I think maybe I've got tunnel vision and I'm just not seeing the obvious. Will probably sleep on it and try again tomorrow night, but if anyone sees it, that would be appreciated.

EDIT: I ended up with a variant of #1. I throw Arc<ImageData> into the pipe and use Arc::strong_count(i) to check if images are dropped elsewhere. It's not _as_ ergonomic as passing around just the refs, but I'm really happy with it.

Jo fucked around with this message at 06:57 on Sep 4, 2023

crazypenguin
Mar 9, 2005
nothing witty here, move along

Jo posted:

I have four ideas for solutions but none of them seem very elegant:

1) Go back to the approach that used channels, but instead of just tx/rx have a tx/rx for sending images AND a tx/rx for 'returning' images which can get reused. Pretty elegant in theory and the lack of locking probably will be faster. Problem: having to return images after being done processing feels too much like manual memory management.
2) Pass a lambda into the read_next_frame_blocking. I rather like this, but I think using closures to do stuff could lead to more hardship in the future.
3) Have another fake image that I swap into the Vec in place of the actual image, then swap it back at return time. This feels risky and dumb.
4) Go back to the channel approach. Bummer to throw away the work and bummer to have the allocations, but it's maybe the nicest going forward.

I just want to note that you never actually stated what the problem was. I assume that code didn't borrow check or work because of the returned reference lifetime, because that's what it sounds like from these solutions.

I agree (4) should be out, there *should* be a workable enough way to avoid the allocation here. I'm assuming the performance would be nice here. :)

All of 1,2,3 seem like reasonable choices. I don't think 3 should be considered that terrible, considering it's something like what std::mem::replace/take do, so it's sort of a "pattern" in Rust.

(2) is probably what I would have chosen. I'm curious why you think it'd be a problem in the future? This is the most like many other solutions to "I want a reference to it but you can't *return* a reference".

The other maybe-but-haven't-thought-it-through possibility is something like MutexGuard. Return not the reference but something that's basically a reference but now has a lifetime parameter that's correctly constrained in the sense of "lives longer than the function call, but shorter than self's lifetime and shorter than next frame or whatever".

(Also, bit sus for an atomic load/store. Even if purely single threaded I'd probably write a function to do that increment with a compare and swap, just to leave my mind at ease...)

(Also also, I assume there are existing ring buffer-based channel implementations, are you sure you need a custom one?)
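
E: to sketch that guard idea a bit more concretely -- totally untested, and assuming the ring buffer is an RwLock<Vec<image::RgbImage>> like in your snippet:

Rust code:
use std::sync::RwLockReadGuard;

// Holds the read lock for as long as the caller holds the frame, so the
// borrow is tied to this guard rather than escaping as a bare reference.
pub struct FrameGuard<'a> {
    lock: RwLockReadGuard<'a, Vec<image::RgbImage>>,
    index: usize,
}

impl std::ops::Deref for FrameGuard<'_> {
    type Target = image::RgbImage;

    fn deref(&self) -> &image::RgbImage {
        &self.lock[self.index]
    }
}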

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

A variation on #3 that you might like more would also be to have the elements of your ring buffer be an Option<image::RgbImage>, which makes the std::mem::take solution pretty straightforward (since taking ownership just replaces it with None)
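
i.e. something like this, off the top of my head (untested):

Rust code:
// Taking ownership of a slot leaves None behind; handing the buffer back
// later is just writing Some(img) into the same slot.
fn take_frame(slots: &mut Vec<Option<image::RgbImage>>, idx: usize) -> Option<image::RgbImage> {
    std::mem::take(&mut slots[idx])
}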

crazypenguin posted:

(Also, bit sus for an atomic load/store. Even if purely single threaded I'd probably write a function to do that increment with a compare and swap, just to leave my mind at ease...)
This also stood out to me. I'm undercaffeinated and was up too late last night, but it seems like you don't want anything sneaking in between the load and the store, which is what a CAS gives you.

e:

Rust code:
thread::sleep(Duration::default());
curious about this line - not saying it's wrong, this is just for my own edification: impl Default for Duration is 0 nanoseconds, and thread::sleep with a zero duration is supposedly a no-op. What happens here in terms of the lock-free behaviour; is the point just to create a barrier for the compiler, or does this actually do more than that (like the spin part of a userspace spinlock)?

Dijkstracula fucked around with this message at 16:21 on Sep 4, 2023

gonadic io
Feb 16, 2011

>>=
Speed-wise, 3 (being very careful with uninit memory) seems like it'd be the fastest. Why overwrite the previous images/uninit memory (on the first go-around) before you need to? It's not like there's difficult control flow or only partial image writes or anything.

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Jo posted:

I'm kicking myself a bit for rewriting a big chunk of this and working myself into a corner.

It looks like you're trying to roll your own SPSC ring buffer. Implementing custom synchronization primitives requires unsafe (and extreme care). You could try using an off the shelf ring buffer like that of the `rtrb` crate instead.

Jo
Jan 24, 2005

:allears:
Soiled Meat
A huge thank-you to everyone who replied. I'll try to get to all of the remarks and go through them when I'm home from work.

If anyone wants to look at the complete mess for their own curiosity: https://github.com/JosephCatrambone/MotionCaptureMk5/blob/main/src/camera_handler.rs

crazypenguin posted:

I just want to note that you never actually stated what the problem was. I assume that code didn't borrow check or work because of the returned reference lifetime, because that's what it sounds like from these solutions.

:doh: D'oh. Yeah, lifetime was the issue, but even with a lifetime annotation I got a compiler error about moving data from inside the vec.

crazypenguin posted:

I agree (4) should be out, there *should* be a workable enough way to avoid the allocation here. I'm assuming the performance would be nice here. :)

All of 1,2,3 seem like reasonable choices. I don't think 3 should be considered that terrible, considering it's something like what std::mem::replace/take do, so it's sort of a "pattern" in Rust.

(2) is probably what I would have chosen. I'm curious why you think it'd be a problem in the future? This is the most like many other solutions to "I want a reference to it but you can't *return* a reference".

I guess my neurotic aversion to taking a closure as a parameter is just that -- neurosis rather than based on any real evidence. There's no harm in me making a method and giving it a try, so maybe I will when I get back home.

crazypenguin posted:

The other maybe-but-haven't-thought-it-through possibility is something like MutexGuard. Return not the reference but something that's basically a reference but now has a lifetime parameter that's correctly constrained in the sense of "lives longer than the function call, but shorter than self's lifetime and shorter than next frame or whatever".

(Also, bit sus for an atomic load/store. Even if purely single threaded I'd probably write a function to do that increment with a compare and swap, just to leave my mind at ease...)

(Also also, I assume there are existing ring buffer-based channel implementations, are you sure you need a custom one?)

Dijkstracula posted:

A variation on #3 that you might like more would also be to have the elements of your ring buffer be an Option<image::RgbImage>, which makes the std::mem::take solution pretty straightforward (since taking ownership just replaces it with None)

This also stood out to me. I'm undercaffeinated and was up too late last night, but it seems like you don't want anything sneaking in between the load and the store, which is what a CAS gives you.

It's sounding like there's support for the mem::swap. I'm still wrapping my head around how exactly I'd make that happen, but maybe I'm undercaffeinated.

Dijkstracula posted:

e:

Rust code:
thread::sleep(Duration::default());
curious about this line - not saying it's wrong, this is just for my own edification: impl Default for Duration is 0 nanoseconds, and thread::sleep with a zero duration is supposedly a no-op. What happens here in terms of the lock-free behaviour; is the point just to create a barrier for the compiler, or does this actually do more than that (like the spin part of a userspace spinlock)?

:stare: I did not realize this. It was a hold-over from years of Java programming -- sleep/yield(0) would relinquish thread control rather than actually sleeping. That's definitely not what I intended. I'll swap it for a delay by the frame time. Thank you for the correction -- that would have really hurt.

Ralith posted:

It looks like you're trying to roll your own SPSC ring buffer. Implementing custom synchronization primitives requires unsafe (and extreme care). You could try using an off the shelf ring buffer like that of the `rtrb` crate instead.

I'd not heard of this, but it might save me a lot of trouble. Thank you!

EDIT: One of the reasons I was almost-but-not-quite trying to roll my own ring buffer was this: the camera interface can capture to an _existing buffer_ and I was hoping to avoid allocating and pushing a new image for each frame. The existing ring buffer solutions will let a person push and pop, yes, but I didn't want to push and pop the data as much as I wanted to WRITE to the buffers and READ from them. So it was more of a... static buffer with spinning read/write heads? The difference being just whether there's an allocation/deallocation of an image. It's probably premature optimization but I've been bitten before by memory pressure when it comes to streaming video data.

Jo fucked around with this message at 19:13 on Sep 4, 2023

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

Jo posted:

:stare: I did not realize this. It was a hold-over from years of Java programming -- sleep/yield(0) would relinquish thread control rather than actually sleeping. That's definitely not what I intended. I'll swap it for a delay by the frame time. Thank you for the correction -- that would have really hurt.

Yeah, I figured you were trying to just park the thread, Java-style - I should have also said that if you wanted to just relinquish control back to the scheduler, thread::yield_now() might be the thing you're after.

Jo
Jan 24, 2005

:allears:
Soiled Meat
Yup! That is indeed exactly what I was looking for. Thank you!

VikingofRock
Aug 24, 2008




Jo posted:

I'd not heard of this, but it might save me a lot of trouble. Thank you!

EDIT: One of the reasons I was almost-but-not-quite trying to roll my own ring buffer was this: the camera interface can capture to an _existing buffer_ and I was hoping to avoid allocating and pushing a new image for each frame. The existing ring buffer solutions will let a person push and pop, yes, but I didn't want to push and pop the data as much as I wanted to WRITE to the buffers and READ from them. So it was more of a... static buffer with spinning read/write heads? The difference being just whether there's an allocation/deallocation of an image. It's probably premature optimization but I've been bitten before by memory pressure when it comes to streaming video data.

It looks like rtrb's Producer and Consumer implement Write and Read, respectively, so I think it might do what you want here? Worth at least trying IMO

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Jo posted:

EDIT: One of the reasons I was almost-but-not-quite trying to roll my own ring buffer was this: the camera interface can capture to an _existing buffer_ and I was hoping to avoid allocating and pushing a new image for each frame. The existing ring buffer solutions will let a person push and pop, yes, but I didn't want to push and pop the data as much as I wanted to WRITE to the buffers and READ from them. So it was more of a... static buffer with spinning read/write heads?

Yes, that's the idea behind ring buffers. Use a ring buffer of `u8` or whatever and write complete frames to it directly.

gonadic io
Feb 16, 2011

>>=
Rust libs, especially specialist data structures like ring buffers, will almost always contort themselves to be pretty efficient, and you can usually rely on them not to allocate when they don't have to.

Jo
Jan 24, 2005

:allears:
Soiled Meat

Ralith posted:

Yes, that's the idea behind ring buffers. Use a ring buffer of `u8` or whatever and write complete frames to it directly.

I must be doing a crap job of explaining. :shobon: For that I do apologize.

The ring buffer in rtrb offers two methods that we care about, push() and pop(). I can push() a [u8] onto the ring buffer, but once the consumer pops the [u8] I can't reuse it, right? I would have to keep allocating new [u8]s, dumping the image data to them, and pushing them onto the ring buffer as the consumer pops them?

What I had with the original setup:

code:
[(R)img a, (W)img b, img c]

// Write head is at 'img b', so copy image buffer to img b, then move write head forward to img c because that's where we'll write next.

[(R)img a, img b, (W)img c]

// Write head is at image c.  Copy buffer into it and advance the write head.

[(RW)img a, img b, img c]

// Write head is AT the read head so we can't do anything.  When the read head moves forward we can update.

// The reading thread reads the reference but does not take ownership of the data.  We can now write over image a.

[(W)img a, (R)img b, img c]
The ring buffers I've seen so far work like this:
code:
[(R)img, (W)null, null]

// Write head is at pos1 and that's empty, so we allocate a buffer and copy the data into it.

[(R)img, newim, (W)null] // Note that newim is an allocation.  We had to allocate because it was null!

// Read pops the data, meaning we can't write to it.

[null, (R)newim, (W)null]
My sorta' gross solution based on the feedback was this. It seems to work pretty okay and has flat memory usage, though I still have to benchmark it. Since the find is over a VERY small array I'm not super concerned. It's not the prettiest code, but since it works, feels fast enough, and keeps memory low, I might just leave it and come back to it later.

code:
let mut allocated_images: Vec<Arc<Mutex<image::RgbImage>>> = vec![];
// Omitting camera locking and stuff here.

// Check if we have any images that were deallocated.  Reuse them.
let img = {
	loop {
		let maybe_image = allocated_images.iter().find(|&i| { Arc::strong_count(i) < 2 });
		if allocated_images.len() < max_buffer_size && maybe_image.is_none() {
			println!("Allocating new image.");
			let i = Arc::new(Mutex::new(image::RgbImage::new(camera_resolution.width(), camera_resolution.height())));
			allocated_images.push(i.clone());
			break i
		} else if let Some(i) = maybe_image {
			println!("Reusing image.");
			break i.clone()
		} else {
			continue;
		}
	}
};

camera.write_frame_to_buffer::<pixel_format::RgbFormat>(img.lock().unwrap().deref_mut());
match tx.send(img) { ... }

gonadic io
Feb 16, 2011

>>=
In the rtrb crate you want a ring buffer whose element type T is u8, NOT [u8], with capacity n * sizeof(image), and then use read_chunk and write_chunk (possibly the _uninit variants). They give you immutable and mutable references to the data respectively, so you can overwrite the bytes without reallocating. It's slightly awkward to deal with raw byte slices instead of the Image structs you might expect as elements, but it absolutely does what you want it to.
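
Something like this -- going from memory, so the exact method names might be slightly off, and I'm using the non-uninit write_chunk for simplicity:

Rust code:
use rtrb::RingBuffer;

// Made-up resolution: bytes per RGB frame.
const FRAME_SIZE: usize = 640 * 480 * 3;

fn main() {
    // Room for 4 frames' worth of bytes; producer goes to the camera thread,
    // consumer to the processing thread.
    let (mut producer, mut consumer) = RingBuffer::<u8>::new(4 * FRAME_SIZE);

    // Camera side: reserve a frame-sized chunk and write straight into it,
    // no per-frame allocation.
    if let Ok(mut chunk) = producer.write_chunk(FRAME_SIZE) {
        let (first, second) = chunk.as_mut_slices();
        // in the real thing the camera would fill these slices
        first.fill(0);
        second.fill(0);
        chunk.commit_all();
    }

    // Processing side: borrow a frame-sized chunk, read it, then release it.
    if let Ok(chunk) = consumer.read_chunk(FRAME_SIZE) {
        let (first, second) = chunk.as_slices();
        let _total_bytes = first.len() + second.len();
        chunk.commit_all();
    }
}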

Jo
Jan 24, 2005

:allears:
Soiled Meat

gonadic io posted:

In the rtrb crate you want a ring buffer whose element type T is u8, NOT [u8], with capacity n * sizeof(image), and then use read_chunk and write_chunk (possibly the _uninit variants). They give you immutable and mutable references to the data respectively, so you can overwrite the bytes without reallocating. It's slightly awkward to deal with raw byte slices instead of the Image structs you might expect as elements, but it absolutely does what you want it to.

I glossed over those methods. You're exactly right: read/write chunk looks perfect. :dance:

teen phone cutie
Jun 18, 2012

last year i rewrote something awful from scratch because i hate myself
hello. i decided to learn Rust this week, and to get my hands dirty with it I'm rebuilding an authorization API that I previously built in Python. It's going well so far

What I've got working is:

  • a fully dockerized Rust HTTP server (using Actix), Postgres, and Nginx
  • the app talking to the database, with a few initial migrations that set up the DB and run on app start
  • a bunch of endpoints returning dummy data, like POST /register, POST /login, and GET /profile

what I am struggling big time with is the actual Rust code, and I was hoping someone could help me out. I know I should probably just take a pause and spend a couple days watching some more YouTube, but I'm really trying to learn by fire here.

So basically what I'm trying to do is return multiple errors to the user, based on the request body they send to me. So this for example:

JSON code:
// POST /register
{
   "username": "hello"
}
would return this

JSON code:
[{
   "error": "email missing",
   "field": "email"
}, {
   "error": "password missing",
   "field": "password"
}]
I understand Rust wants you to lean on compile-time errors, but I'm not quite understanding the best approach to getting the JSON, iterating over the keys, and creating a list of errors for each missing key.

Actix exposes the request body as either raw Bytes or parsed JSON, like so:

Rust code:
use actix_web::{web, HttpRequest, HttpResponse};

pub async fn register(request: HttpRequest, request_body: web::Bytes) -> HttpResponse {
}
or

Rust code:
use actix_web::{web, HttpRequest, HttpResponse};

pub async fn register(request: HttpRequest, request_body: web::Json<SomeStruct>) -> HttpResponse {
}
I tried writing up a validator method like this, but I'm not really sure of the best way to actually convert the JSON into the format this method expects:

Rust code:
use serde::Serialize;
use serde_json::Value;

#[derive(Clone, Serialize)]
pub struct BadPayload {
    error: String,
    field: String,
}

// first argument is your payload
// second is the keys that are required
pub fn validate_payload(
    payload: Value,
    keys: Vec<&str>,
) -> Result<(), Vec<BadPayload>> {
    let mut result = vec![];
    // iterate over the required keys and see if they exist in the payload
    for key in keys.iter() {
        match payload.get(*key) {
            // key present: nothing to report
            Some(_) => (),
            // key missing: record an error for it
            None => result.push(BadPayload {
                error: format!("key {} is missing.", key),
                field: key.to_string(),
            }),
        };
    }

    match result.is_empty() {
        true => Ok(()),
        false => Err(result),
    }
}

teen phone cutie
Jun 18, 2012

last year i rewrote something awful from scratch because i hate myself
okay after some battling, here's what I came up with

the endpoint:

Rust code:
pub async fn register(request: HttpRequest, request_body: web::Bytes) -> HttpResponse {
    let register = LoginResponse {
        token: "1234".to_string(),
        username: "dummy-user".to_string(),
    };

    // return either 400 or 200 based on the existence of validation errors
    match validate_json(&request_body, &vec!["username", "password"]) {
        Ok(_) => HttpResponse::Ok().json(register),
        Err(err) => HttpResponse::BadRequest().json(err),
    }
}
and the actual validating function:

Rust code:
fn validate_json<'a>(
    json: &web::Bytes,
    keys: &'a Vec<&str>,
) -> Result<HashMap<&'a str, serde_json::Value>, Vec<BadPayload>>  {
    // convert json to string
    let json_as_string = &String::from_utf8(json.to_vec()).unwrap();
    // create a hashmap from the passed json
    let lookup: HashMap<String, serde_json::Value> = serde_json::from_str(json_as_string).unwrap();
    // another hashmap for only the key/value pairs we choose to accept from the user
    let mut map = HashMap::new();

    let mut errors = vec![];

    // iterate over all the allowed keys
    for key in keys {
        // if we can find it in the json, copy it to the hashmap
        match lookup.get(*key) {
            Some(val) => {
                map.insert(*key, val.clone());
            },
            // if not, add it to the error vector
            None => {
                errors.push(BadPayload {
                    error: format!("key {} is missing.", *key),
                    field: key.to_string(),
                });
            }
        };
    }

    // return based on error vector existance
    match errors.is_empty() {
        true => Ok(map),
        false => Err(errors),
    }
}
