Jo
Jan 24, 2005

:allears:
Soiled Meat
I've been dicking around with Rust 1.0 recently and started fiddling with some Project Euler exercises. One thing that has been biting me is this: at times I'd like to use a variable x as a usize and other times as a u64. In this case, I have one function which builds a sieve for the Sieve of Eratosthenes (the Vec requiring a usize) and another which does a multiplication into this requiring a u64. Is there an idiomatic way in Rust to use a numeric type, or am I attacking this from entirely the wrong angle?


Jo
Jan 24, 2005

:allears:
Soiled Meat

Jsor posted:

You can do a type conversion with (x as usize), if you're on a 64-bit machine this is probably a no-op at runtime. Though it sounds like your multiplication function should probably be generic over the Mul trait.

Unless you're multiplying by a constant. At the moment, unfortunately there's no good way to do generic arithmetic with any constants.

All constants. I have two functions, one which finds the prime factors of a number (u64), and another which builds a sieve (which I'd use u64 for, but the array constructor requires usize). I'll see if I can copy my code over to this machine to share. I feel a bit bad using 'as usize' all over the place, but if there's no better way, I'll deal with it.
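A minimal sketch of keeping the conversions at the boundary (the `sieve` function and names here are invented for illustration): index the Vec in usize, widen back to u64 for arithmetic. `try_into` makes the u64→usize narrowing explicit where `as` would silently truncate on a 32-bit target.

```rust
use std::convert::TryInto;

// Build a Sieve of Eratosthenes up to `limit`. The u64→usize conversion
// happens once, at the edge, instead of `as usize` scattered everywhere.
fn sieve(limit: u64) -> Vec<bool> {
    let n: usize = limit.try_into().expect("limit too large for this platform");
    let mut is_prime = vec![true; n + 1];
    is_prime[0] = false;
    if n >= 1 {
        is_prime[1] = false;
    }
    let mut i = 2usize;
    while i * i <= n {
        if is_prime[i] {
            let mut j = i * i;
            while j <= n {
                is_prime[j] = false;
                j += i;
            }
        }
        i += 1;
    }
    is_prime
}

fn main() {
    // Widen the indices back to u64 for the arithmetic side.
    let primes: Vec<u64> = sieve(30)
        .iter()
        .enumerate()
        .filter(|&(_, &p)| p)
        .map(|(i, _)| i as u64)
        .collect();
    assert_eq!(primes, vec![2, 3, 5, 7, 11, 13, 17, 19, 23, 29]);
}
```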

Jo
Jan 24, 2005

:allears:
Soiled Meat
EDIT: Got it. :3:

Leaving the old buggy playground link for posterity. For future self: used std::str::SplitWhitespace.

http://is.gd/PMopF8

Jo fucked around with this message at 20:35 on May 5, 2016

Jo
Jan 24, 2005

:allears:
Soiled Meat
I'm bumping my head against a module error. Most of the people online seem to say "modules are confusing" and I agree with that sentiment.

I have the following structure:

code:
\ <myapp>
 | Cargo.toml
 \  <src>
   | app.rs
   | geometry.rs
   | settings.rs
   | main.rs
geometry.rs has the following:

code:
mod geometry {

#[derive(Copy, Clone)]
pub struct Vec2<T> {
        pub x: T,
        pub y: T
}

#[derive(Copy, Clone)]
pub struct Vec3<T> {
        pub x: T,
        pub y: T,
        pub z: T
}
pub type Vec2f = Vec2<f64>;
pub type Vec3f = Vec3<f64>;

}
When I try to use it in app.rs, I see

code:
src/app.rs:2:5: 2:13 error: unresolved import `geometry`. There is no `geometry` in `???` [E0432]
src/app.rs:2 use geometry;
That happens if I use `use geometry` in app.rs. If I use `mod geometry`, in those places, instead I get

code:
src/app.rs:2:5: 2:13 error: cannot declare a new module at this location
src/app.rs:2 mod geometry;
                 ^~~~~~~~
src/app.rs:2:5: 2:13 note: maybe move this module `app` to its own directory via `app/mod.rs`
src/app.rs:2 mod geometry;
                 ^~~~~~~~
src/app.rs:2:5: 2:13 note: ... or maybe `use` the module `geometry` instead of possibly redeclaring it
src/app.rs:2 mod geometry;
The above also appears if I remove the mod geometry { ... } from geometry.rs.

If I use instead `use geometry::*;` I get this:

code:
src/app.rs:2:5: 2:13 error: unresolved import `geometry::*`. Maybe a missing `extern crate geometry`? [E0432]
src/app.rs:2 use geometry::*;
                 ^~~~~~~~
So I'm kinda' confused about when I should use 'mod' and when I should use 'use'. Online docs don't seem to be helping. Do I _have_ to use a folder with a mod.rs?


EDIT: It looks like the problem is I've got main.rs -> app.rs -> geometry.rs. I can't use the flat mapping with this setup and have to use directories.

My new (messy) directory structure is this:

code:
\ myprogram
||- main.rs
|\ <app>
||- mod.rs
||\ <settings>
|||- mod.rs
||\ <geometry>
|||- mod.rs

main.rs references app.rs and settings.rs (for assorted config details).
app.rs references settings.rs and geometry.rs.
settings.rs references geometry.rs.

Jo fucked around with this message at 00:53 on May 17, 2016

Jo
Jan 24, 2005

:allears:
Soiled Meat

Vanadium posted:

For that to work out you want no mod lines anywhere but main.rs, main.rs has mod app; mod geometry; mod settings;, and everybody else has use app; use geometry; use settings; as necessary.

mod items are for defining your crate's tree structure, you pretty much only ever want one mod line per module in your whole crate, probably at the highest level at which you want that module to be used. use items just bring stuff into scope so you don't have to use absolute paths that start with :: everywhere.

Thank you so much! That did it.
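For anyone hitting the same wall later, here's a single-file sketch of Vanadium's rule, with inline `mod` blocks standing in for the separate .rs files: `mod` appears exactly once per module, at the crate root; everything else only uses `use`. The `origin` function is made up just to exercise it.

```rust
// In a real layout, these bodies would live in src/geometry.rs and
// src/app.rs, and main.rs would contain just `mod geometry;` / `mod app;`.
mod geometry {
    #[derive(Copy, Clone, Debug, PartialEq)]
    pub struct Vec2<T> {
        pub x: T,
        pub y: T,
    }
    pub type Vec2f = Vec2<f64>; // `type`, not `typedef`
}

mod app {
    // No second `mod geometry;` here -- that would redeclare the module.
    use crate::geometry::Vec2f;

    pub fn origin() -> Vec2f {
        Vec2f { x: 0.0, y: 0.0 }
    }
}

fn main() {
    let o = app::origin();
    assert_eq!(o, geometry::Vec2f { x: 0.0, y: 0.0 });
}
```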

Jo
Jan 24, 2005

:allears:
Soiled Meat
I'm sorta' stuck on generics and inheritance now. I'm defining `trait Node` which has a few attributes. I'd like to make a Graph struct which has a HashMap <String, Node> in it. Is there any way to make graph accept any mix of types, so long as they implement Node? Box them?
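A minimal sketch of the boxing approach (the `Node` trait and concrete node types here are stand-ins for whatever the real graph uses): `Box<dyn Node>` erases the concrete type, so the map can hold any mix of implementors.

```rust
use std::collections::HashMap;

trait Node {
    fn label(&self) -> String;
}

// Two unrelated concrete types, both implementing Node.
struct Input {
    name: String,
}
struct Sum;

impl Node for Input {
    fn label(&self) -> String {
        format!("input:{}", self.name)
    }
}
impl Node for Sum {
    fn label(&self) -> String {
        "sum".to_string()
    }
}

struct Graph {
    // Box<dyn Node> lets one map hold any mix of Node implementors.
    nodes: HashMap<String, Box<dyn Node>>,
}

fn main() {
    let mut g = Graph { nodes: HashMap::new() };
    g.nodes.insert("a".into(), Box::new(Input { name: "x".into() }));
    g.nodes.insert("b".into(), Box::new(Sum));
    assert_eq!(g.nodes["a"].label(), "input:x");
    assert_eq!(g.nodes["b"].label(), "sum");
}
```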

Jo fucked around with this message at 01:22 on Jun 2, 2016

Jo
Jan 24, 2005

:allears:
Soiled Meat
I noticed recently that when developing I'm "just getting it to compile". That might mean adding & in places or doing foo.to_string() instead of foo. Am I going to be leaking memory like crazy or painting myself into a corner, or can I assume reasonably correct behavior as long as the code compiles (assuming no logic bugs). Rust's doc makes it seem like leaking memory is acceptable, but I'm in a weird state of wanting to free stuff without really having a way to do so aside from drop().

Jo
Jan 24, 2005

:allears:
Soiled Meat

Jsor posted:

I mean, there are no guarantees about where memory is freed except that it will always be freed sometime between when it's never referenced again and when it goes out of scope, the compiler is free to optimize around that AFAIK, but it will always be dropped by the time it goes out of scope (except for weird cases involving circular reference counters). I'm not sure what you're really asking about. If you absolutely need to free memory NOW and you can't use scoping to achieve it, that's what drop is for. But yes, you should assume memory is alive until you explicitly call drop or the object goes out of scope.

Also, apparently (&str).to_owned() is faster than (&str).to_string() for some reason.

I'm just worried about building a heaping pile of poo poo because I'm "just getting it done" instead of taking the time to jump back into the docs and running the analytical route. This is entirely a personal project, so I'm trying to concern myself with more of the architectural aspects than I am with the details of the code. It's a prototype engine based on Glium for Awful Jam next month, if that makes a difference.

Rust is in a strange place for me because it sits right between the managed stuff I've done in Python and Java and the completely manual stuff I've done in C. I feel like I should be calling malloc and free and worrying about & vs * vs [], and the fact that I can just kinda' write whatever, fix the compiler warnings, and have it work means I'm suspicious that I'm missing something fundamental.
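A tiny sketch of the ownership model in question (sizes here are arbitrary): freeing is tied to ownership and scope, and `drop()` is just an ordinary function that takes ownership early, so "just getting it to compile" really does mean no leaks in the malloc/free sense.

```rust
// Allocate a buffer, free it early with drop(), return its length.
fn allocate_and_free() -> usize {
    let big = vec![0u8; 1024 * 1024];
    let len = big.len();
    drop(big); // deallocated here; `big` can no longer be used below
    len
}

fn main() {
    assert_eq!(allocate_and_free(), 1024 * 1024);

    {
        let also_big = vec![0u8; 1024 * 1024];
        assert_eq!(also_big.len(), 1024 * 1024);
    } // deallocated here automatically, no drop() needed
}
```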

Jo
Jan 24, 2005

:allears:
Soiled Meat
I'm a little unsure how to clear up this borrowing pattern. I have a Graph object I want to modify (for the sake of memoization), but I've got a big match happening in it.

code:
    fn get_derivative(&mut self, node_id : NodeId, wrt : &[NodeId], input_map : &HashMap<NodeId, Vec<f32>>) -> (Vec<f32>, Vec<f32>) {
...
        match self.nodes[node_id].operation { // I guess this is an immutable borrow which keeps us from doing the 'mutable borrow' below.
            Operation::MatrixMultiply(n1, n2) => {
                let (a_real, a_res) = self.get_derivative(n1, &wrt, &input_map);  // This is a mutable borrow.
Not sure how I should be rearranging my code to avoid this borrow.

Jo fucked around with this message at 06:27 on Aug 19, 2016

Jo
Jan 24, 2005

:allears:
Soiled Meat

rjmccall posted:

I think it's that you're trying to use self while it's borrowed (which it is because you're borrowing one of its fields). Can you pull out the node as a value whose lifetime isn't dependent on self, so that self can be un-borrowed by the time you want to call a method on it?

Doesn't seem like it. :( When I pulled the node as a value it just complained about moving out of the indexed context.

I guess another way of asking would be, "How the hell do I handle mutable borrows with recursion?"

EDIT: I can use separate hashmaps to memoize, but it's not a pretty solution.
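For the thread: a sketch of the usual way out, assuming the operation type can derive Copy (all the types below are invented stand-ins for the real graph). Copying the operation out of `self.nodes` ends the immutable borrow before the recursive `&mut self` call, so the match and the memoization can coexist.

```rust
#[derive(Copy, Clone)]
enum Operation {
    Constant(f32),
    Double(usize), // index of the input node
}

struct Graph {
    nodes: Vec<Operation>,
    memo: Vec<Option<f32>>,
}

impl Graph {
    fn eval(&mut self, id: usize) -> f32 {
        if let Some(v) = self.memo[id] {
            return v;
        }
        // Copy the operation out of self; the borrow of self.nodes ends
        // here, so the recursive `&mut self` call below is allowed.
        let op = self.nodes[id];
        let v = match op {
            Operation::Constant(c) => c,
            Operation::Double(input) => 2.0 * self.eval(input),
        };
        self.memo[id] = Some(v); // memoize on the way back up
        v
    }
}

fn main() {
    let mut g = Graph {
        nodes: vec![
            Operation::Constant(3.0),
            Operation::Double(0),
            Operation::Double(1),
        ],
        memo: vec![None; 3],
    };
    assert_eq!(g.eval(2), 12.0);
    assert_eq!(g.memo[0], Some(3.0)); // intermediate results were cached
}
```

If the operation held non-Copy data, a cheap `.clone()` of just that field would do the same job.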

Jo fucked around with this message at 06:55 on Aug 19, 2016

Jo
Jan 24, 2005

:allears:
Soiled Meat
I want to do an SF meetup. How'd you find yours? Meetup.com?

Jo
Jan 24, 2005

:allears:
Soiled Meat
Speaking of traits, I've defined this type 'Skeleton' which I'm using as the "parent".

code:
trait Skeleton {
	fn load<T : Skeleton>(filename : &String) -> T;
	fn get_all_transform_matrices_at_time(&self, animation_num : u8, frame_time : f32, blend_type : BlendType) -> HashMap<String, Matrix<f32>>;
	fn get_all_transform_matrices(&self, animation_num : u8, frame_num : u8) -> HashMap<String, Matrix<f32>>;
}
code:
111 | 			BVHSkeleton::new()
    | 			^^^^^^^^^^^^^^^^^^ expected type parameter, found struct `bvh::BVHSkeleton`
    |
    = note: expected type `T`
    = note:    found type `bvh::BVHSkeleton`
I'd like to define a 'load' method which takes a filename, since each different skeleton format (BVH, C3D, Blend, etc) will probably have its own skeleton struct.

Problem is the load() method should be returning a physical item, and I think for a trait object to be instanced it needs &self in all the methods. As a workaround, I'll probably get rid of the trait and just have a single skeleton type with a bunch of loaders. Still, it would be nice in future development to know how to do this.

Jo
Jan 24, 2005

:allears:
Soiled Meat

Ralith posted:

I think you want
code:
trait Skeleton {
    fn load(filename : &String) -> Self;
    // ...
}
Your original code said that each implementation of Skeleton must define a function which can load a given file as any implementation of Skeleton whatsoever. Type parameters are provided by the caller, remember.

Aha! Yes! That seems to have done it! Thanks! :woop:
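For completeness, a sketch of Ralith's fix in use. The contents of `BVHSkeleton` here are invented (a real loader would parse the file); the point is that `-> Self` ties `load` to whichever type implements the trait, and `&str` is the more idiomatic parameter than `&String`.

```rust
trait Skeleton {
    // Returning Self means BVHSkeleton::load returns a BVHSkeleton,
    // C3DSkeleton::load returns a C3DSkeleton, and so on.
    fn load(filename: &str) -> Self;
}

struct BVHSkeleton {
    source: String,
}

impl Skeleton for BVHSkeleton {
    fn load(filename: &str) -> Self {
        // Stand-in body: a real loader would read and parse the file here.
        BVHSkeleton { source: filename.to_owned() }
    }
}

fn main() {
    let skel = BVHSkeleton::load("walk.bvh");
    assert_eq!(skel.source, "walk.bvh");
}
```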

Jo
Jan 24, 2005

:allears:
Soiled Meat
impl Trait looks pretty goddamn good. I could have used that a few months back with my compute graph implementation. Would probably have let me split up my node type a lot.

Jo
Jan 24, 2005

:allears:
Soiled Meat
I'm kicking myself a bit for rewriting a big chunk of this and working myself into a corner.

I'm processing a stream of video data at ~120Hz. I have a preallocated ring buffer with space for N image captures. The buffers are also preallocated, so the camera thread is just going through, picking the next element in the ring buffer, and copying contents to that. Works pretty okay.

Or it would, if I didn't have to hand out a reference to elements in the ring buffer.

code:
	pub fn read_next_frame_blocking(&mut self) -> &image::RgbImage {
		// Important safety tip: WE DO NOT RELEASE THE REFERENCE TO THE PREVIOUS RGBIMAGE UNTIL THIS READ FRAME IS CALLED AGAIN.
		let rh = (self.ring_buffer_read_position.load(Ordering::Relaxed) + 1) % self.ring_buffer_size;
		self.ring_buffer_read_position.store(rh, Ordering::Relaxed);
		loop {
			if self.ring_buffer_write_position.load(Ordering::Relaxed) == rh {
				// Buffer is empty.
				thread::sleep(Duration::default());
			} else {
				break;
			}
		}
		let ring_buffer_read_lock = self.ring_buffer.read().expect("Unable to get read lock on ring buffer.");
		let image_ref = ring_buffer_read_lock.get(rh).expect("Failed to unwrap image at read head. This can't happen.");
		image_ref
	}
Before this I was just dumping stuff to a channel, but that involved an absurd number of allocations and there was great sadness.

I have four ideas for solutions but none of them seem very elegant:

1) Go back to the approach that used channels, but instead of just tx/rx have a tx/rx for sending images AND a tx/rx for 'returning' images which can get reused. Pretty elegant in theory, and the lack of locking will probably be faster. Problem: having to return images after being done processing feels too much like manual memory management.
2) Pass a lambda into the read_next_frame_blocking. I rather like this, but I think using closures to do stuff could lead to more hardship in the future.
3) Have another fake image that I swap into the Vec in place of the actual image, then swap it back at return time. This feels risky and dumb.
4) Go back to the channel approach. Bummer to throw away the work and bummer to have the allocations, but it's maybe the nicest going forward.

I'm sure there's a better approach to all of this, but I think maybe I've got tunnel vision and I'm just not seeing the obvious. Will probably sleep on it and try again tomorrow night, but if anyone sees it, that would be appreciated.

EDIT: I ended up with a variant of #1. I throw Arc<ImageData> into the pipe and use Arc::strong_count(i) to check if images are dropped elsewhere. It's not _as_ ergonomic as passing around just the refs, but I'm really happy with it.

Jo fucked around with this message at 06:57 on Sep 4, 2023

Jo
Jan 24, 2005

:allears:
Soiled Meat
A huge thank-you to everyone who replied. I'll try to get to all of the remarks and go through them when I'm home from work.

If anyone wants to look at the complete mess for their own curiosity: https://github.com/JosephCatrambone/MotionCaptureMk5/blob/main/src/camera_handler.rs

crazypenguin posted:

I just want to note that you never actually stated what the problem was. I assume that code didn't borrow check or work because of the returned reference lifetime, because that's what it sounds like from these solutions.

:doh: D'oh. Yeah, lifetime was the issue, but even with a lifetime annotation I got a compiler error about moving data from inside the vec.

crazypenguin posted:

I agree (4) should be out, there *should* be a workable enough way to avoid the allocation here. I'm assuming the performance would be nice here. :)

All of 1,2,3 seem like reasonable choices. I don't think 3 should be considered that terrible, considering it's something like what std::mem::replace/take do, so it's sort of a "pattern" in Rust.

(2) is probably what I would have chosen. I'm curious why you think it'd be a problem in the future? This is the most like many other solutions to "I want a reference to it but you can't *return* a reference".

I guess my neurotic aversion to taking a closure as a parameter is just that -- neurosis rather than based on any real evidence. There's no harm in me making a method and giving it a try, so maybe I will when I get back home.

crazypenguin posted:

The other maybe-but-haven't-thought-it-through possibility is something like MutexGuard. Return not the reference but something that's basically a reference but now has a lifetime parameter that's correctly constrained in the sense of "lives longer than the function call, but shorter than self's lifetime and shorter than next frame or whatever".

(Also, bit sus for an atomic load/store. Even if purely single threaded I'd probably write a function to do that increment with a compare and swap, just to leave my mind at ease...)

(Also also, I assume there are existing ring buffer-based channel implementations, are you sure you need a custom one?)

Dijkstracula posted:

A variation on #3 that you might like more would also be to have the elements of your ring buffer be an Option<image::RgbImage>, which makes the std::mem::take solution pretty straightforward (since taking ownership just replaces it with None)

This also stood out to me, I'm undercaffeinated and was up too late last night but seems like you don't want a sequencing point between the load and store which is what a CAS gives you

It's sounding like there's support for the mem::swap. I'm still wrapping my head around how exactly I'd make that happen, but maybe I'm undercaffeinated.
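To make the `Option` slot idea concrete (with `Vec<u8>` standing in for `image::RgbImage`): `std::mem::take` swaps `None` into the slot and hands back the owned value, so there's no "cannot move out of indexed content" error, and the buffer can be handed back for reuse when the reader is done.

```rust
// Take the frame out of a ring slot; the slot is left as None.
fn take_frame(ring: &mut Vec<Option<Vec<u8>>>, pos: usize) -> Option<Vec<u8>> {
    // mem::take replaces the slot with Option's default (None) and
    // returns ownership of whatever was there.
    std::mem::take(&mut ring[pos])
}

fn main() {
    let mut ring = vec![Some(vec![1u8, 2, 3]), Some(vec![4, 5, 6])];

    let frame = take_frame(&mut ring, 0);
    assert_eq!(frame, Some(vec![1, 2, 3]));
    assert!(ring[0].is_none()); // slot is empty, ready to be refilled

    // The reader hands the buffer back (or the writer installs a fresh one).
    ring[0] = frame;
    assert!(ring[0].is_some());
}
```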

Dijkstracula posted:

e:

Rust code:
thread::sleep(Duration::default());
curious about this line - not saying it's wrong but this is for my own edification: impl Default for Duration is 0 nanoseconds, but thread::sleep(nanoseconds(0)) is supposedly a no-op. What happens here in terms of the lock free behaviour here; is the point just to create a barrier for the compiler or does this actually do more than that (like the spin part of a userspace spinlock)?

:stare: I did not realize this. It was a hold-over from years of Java programming -- sleep/yield(0) would relinquish thread control rather than actually sleeping. That's definitely not what I intended. I'll swap it for a delay by the frame time. Thank you for the correction -- that would have really hurt.

Ralith posted:

It looks like you're trying to roll your own SPSC ring buffer. Implementing custom synchronization primitives requires unsafe (and extreme care). You could try using an off the shelf ring buffer like that of the `rtrb` crate instead.

I'd not heard of this, but it might save me a lot of trouble. Thank you!

EDIT: One of the reasons I was almost-but-not-quite trying to roll my own ring buffer was this: the camera interface can capture to an _existing buffer_ and I was hoping to avoid allocating and pushing a new image for each frame. The existing ring buffer solutions will let a person push and pop, yes, but I didn't want to push and pop the data as much as I wanted to WRITE to the buffers and READ from them. So it was more of a... static buffer with spinning read/write heads? The difference being just whether there's an allocation/deallocation of an image. It's probably premature optimization but I've been bitten before by memory pressure when it comes to streaming video data.

Jo fucked around with this message at 19:13 on Sep 4, 2023

Jo
Jan 24, 2005

:allears:
Soiled Meat
Yup! That is indeed exactly what I was looking for. Thank you!

Jo
Jan 24, 2005

:allears:
Soiled Meat

Ralith posted:

Yes, that's the idea behind ring buffers. Use a ring buffer of `u8` or whatever and write complete frames to it directly.

I must be doing a crap job of explaining. :shobon: For that I do apologize.

The ring buffer in rtrb offers two methods that we care about, push() and pop(). I can push() a [u8] onto the ring buffer, but once the consumer pops the [u8] I can't reuse it, right? I would have to keep allocating new [u8]s, dumping the image data to them, and pushing them onto the ring buffer as the consumer pops them?

What I had with the original setup:

code:
[(R)img a, (W)img b, img c]

// Write head is at 'img b', so copy image buffer to img b, then move write head forward to img c because that's where we'll write next.

[(R)img a, img b, (W)img c]

// Write head is at image c.  Copy buffer into it and advance the write head.

[(RW)img a, img b, img c]

// Write head is AT the read head so we can't do anything.  When the read had moves forward we can update.

// The reading thread reads the reference but does not take ownership of the data.  We can now write over image a.

[(W)img a, (R)img b, img c]
The ring buffers I've seen so far work like this:
code:
[(R)img, (W)null, null]

// Write head is at pos1 and that's empty, so we allocate a buffer and copy the data into it.

[(R)img, newim, (W)null] // Note that newim is an allocation.  We had to allocate because it was null!

// Read pops the data, meaning we can't write to it.

[null, (R)newim, (W)null]
My sorta' gross solution based on feedback was this. It seems to work pretty okay and has flat memory usage, but I have to benchmark performance. Since the find is on a VERY small array I'm not super concerned, but it's not the prettiest code. Since it works, it feels fast enough, and it's very low memory, I might just leave it and come back to it.

code:
let mut allocated_images: Vec<Arc<Mutex<image::RgbImage>>> = vec![];
// Omitting camera locking and stuff here.

// Check if we have any images that were deallocated.  Reuse them.
let img = {
	loop {
		let maybe_image = allocated_images.iter().find(|&i| { Arc::strong_count(i) < 2 });
		if allocated_images.len() < max_buffer_size && maybe_image.is_none() {
			println!("Allocating new image.");
			let i = Arc::new(Mutex::new(image::RgbImage::new(camera_resolution.width(), camera_resolution.height())));
			allocated_images.push(i.clone());
			break i
		} else if let Some(i) = maybe_image {
			println!("Reusing image.");
			break i.clone()
		} else {
			continue;
		}
	}
};

camera.write_frame_to_buffer::<pixel_format::RgbFormat>(img.lock().unwrap().deref_mut());
match tx.send(img) { ... }


Jo
Jan 24, 2005

:allears:
Soiled Meat

gonadic io posted:

In the rtrb crate you want to have a ring buffer where the elements T is u8 NOT [u8] with capacity n*sizeof(image) and use read_chunk and write_chunk (possibly _uninit). They give you immutable and mutable references to the data respectively and you can overwrite the bytes without reallocating. It's slightly awkward to deal with generic byte slices instead of Image structs as your elements that you might expect but it absolutely does what you want it to.

I glossed over those methods. You're exactly right: read/write chunk looks perfect. :dance:
