|
Having had to write a bunch of fixed-point cal/comp math for instrumentation firmware, I would recommend avoiding it if possible. Equivalent code with floats is way easier to read and harder to gently caress up in a painful way, and you have less analysis to do to ensure overflow doesn't occur and that you have sufficient granularity. If one single fmul on the Cortex M0+ hadn't blown through the majority of my cycle budget, I would have used floats and never looked back. Semi-related fun fact: the M0/M0+ doesn't even have an integer divide instruction!
|
# ? May 22, 2020 03:45 |
|
If you don't need lots of precision through intermediate calculations but also don't have hardware float32, I find it easier to think about things as an integer value in a changed unit. I.e., instead of having a fixed-point value in degrees with an implicit binary point after some bit, have it be an integer value in hundredths of a degree. Doing it in base 10 makes it easy to think about roundoff behavior and to read/write literal values.
|
# ? May 22, 2020 05:12 |
|
An observation: if messing with things like this, it sounds wise to #1: separate/abstract, so your program code acts like it's manipulating normal numbers, and #2: use third-party libs where the work's done if able. This feels like a trap where optimization and program logic collide to cause tough-to-read/write code, and bugs. Btw, this STM32Cube IDE is great. It exposes most (all?) chip settings, pin functions, dev board extras etc. in a UI, which generates config code, so you know what the chip is capable of, what the defaults are, etc., in a way more directly linked to the code than reading a datasheet. It's even helping as a reference in a non-Cube HAL project using the same chip/board. I.e., all I have for that is API docs, without much hint of default values. SPI polarity? Phase? What are those? Check the IDE and see what they're set to. Dominoes fucked around with this message at 05:24 on May 22, 2020 |
# ? May 22, 2020 05:18 |
|
Foxfire_ posted:If you don't need lots of precision through intermediate calculations but also don't have hardware float32, I find it easier to think about things as an integer value in a changed unit. I.e., instead of having a fixed-point value in degrees with an implicit binary point after some bit, have it be an integer value in hundredths of a degree. Doing it in base 10 makes it easy to think about roundoff behavior and to read/write literal values. ? That's what fixed point is. You still have to scale after a multiplication, because 5 (hundredths of a degree) times 5 (hundredths of a degree) is not 25 (hundredths of a degree). e: unless you’re also tracking your units separately (25 squared hundredths of a degree)
|
# ? May 22, 2020 08:40 |
|
Yes, tracking the units separately (usually in a suffix on the variable) and then converting them as an explicitly separate step instead of part of the operation on the value. It is not at all different from what you do in a generic fixed point operation, it's just easier for me to think about. e.g. code:
|
# ? May 22, 2020 09:40 |
|
Zopotantor posted:? That's what fixed point is. You still have to scale after a multiplication, because 5 (hundredths of a degree) times 5 (hundredths of a degree) is not 25 (hundredths of a degree). Five degrees times five degrees is not twenty-five degrees either, so I'm not sure how that's different? edit: For clarity, I get that 1 dm² is not 0.1m², I'm just being difficult - and your edit does explicitly cover that. Computer viking fucked around with this message at 15:47 on May 22, 2020 |
# ? May 22, 2020 12:23 |
|
Computer viking posted:Five degrees times five degrees is not twenty-five degrees either, so I'm not sure how that's different? Clearly the result is in stedegrees.
|
# ? May 22, 2020 14:51 |
|
Like Foxfire said, the natural approach there would be to keep track of units. While milli-something × milli-something doesn't map directly to something × something, you can map milli-something² to something² and do the conversions as the first and last steps.
|
# ? May 22, 2020 14:56 |
|
Somehow it's easier for me to think through: "Approximate multiplying an integer by 1.234 using only integer math" than "Encode 1.234 as fixed point number, multiply it with another fixed point number, then do the scaling" Even though both of them end up at (x * 1234)/1000 and it's exactly the same thing. (I regret bringing it up since it's probably idiosyncratic to my brain)
|
# ? May 22, 2020 18:36 |
|
To be clear, what Foxfire is describing is decimal fixed point, whereas what embedded toolchains provide is usually binary fixed point. Fixed-point vs. floating-point is technically orthogonal to binary vs. decimal, although of course if you want hardware support, you're at the whim of your architecture design. Fixed point is naturally great at addition and kind of punishingly bad at multiplication, especially for decimal. Fixed point also has a code-size hit, especially if you're using saturating math. Floating point's biggest advantage, besides the inherent advantage of a single type working for a lot of different applications, is that it degrades much more gracefully if you get a bit outside the range you were expecting.
|
# ? May 22, 2020 18:53 |
|
Binary fixed point doesn't seem especially bad at multiplication, if you have a hardware integer multiplier to leverage. Certainly no more difficult than a floating-point multiply, unless you have explicit hardware support for one but not the other.
|
# ? May 22, 2020 19:26 |
|
Jabor posted:Binary fixed point doesn't seem especially bad at multiplication, if you have a hardware integer multiplier to leverage. Certainly no more difficult than a floating-point multiply, unless you have explicit hardware support for one but not the other. How do you deal with overflow?
|
# ? May 22, 2020 19:55 |
|
Running out of bits is a problem either way. Whether dramatic failure (fixed-point overflow) or subtle failure (floating point dropping precision) is worse is up to application and philosophy.
|
# ? May 22, 2020 20:00 |
|
I'm honestly kind of surprised that floating point multiply is slower than floating point add. Being split into an exponent and mantissa seems like it should make it much easier to compute the exponent of the product than the exponent of the sum. I guess maybe it's dominated by the slowness of multiplying the mantissas? I've coded a simple hardware floating point unit (no denormalized numbers) and multiply was definitely easier to implement.
|
# ? May 22, 2020 20:03 |
|
Absurd Alhazred posted:How do you deal with overflow? I would simply choose to never overflow. Tongue in cheek, but when I've used fixed point in the past (not often) I carefully chose the format and limited input and intermediate values to make it (hopefully!) impossible. Which was, shall we say, a bit of a headache
|
# ? May 22, 2020 20:04 |
|
Jeffrey of YOSPOS posted:I'm honestly kind of surprised that floating point multiply is slower than floating point add. Being split into an exponent and mantissa seems like it should make it much easier to compute the exponent of the product than the exponent of the sum. I guess maybe it's dominated by the slowness of multiplying the mantissas? I've coded a simple hardware floating point unit (no denormalized numbers) and multiply was definitely easier to implement. I'd naively expect a multiplier to be slower than an adder and a barrel shifter
|
# ? May 22, 2020 20:40 |
|
Qwertycoatl posted:I'd naively expect a multiplier to be slower than an adder and a barrel shifter Right. An FP multiplier is easier to implement given the existence of integer adders and multipliers, but an integer multiplier is just substantially more expensive to begin with.
|
# ? May 22, 2020 21:36 |
|
Absurd Alhazred posted:How do you deal with overflow? Dehumanize yourself and face to range analysis
|
# ? May 23, 2020 02:52 |
|
Any vcpkg fans here? Similar to this question, I’m trying to figure out some sort of non-lovely way to iterate on packages. Having to go through some sort of uninstall/build/reinstall dance for any edits isn’t going to work. Is there any way to point vcpkg at a local source tree rather than make its own copy? I’d rather manage the dependent source as submodules rather than portfiles anyhow. If another package manager can do this, I’m not committed to vcpkg just yet.
|
# ? May 24, 2020 20:03 |
|
Is there anything to help shorten the idiom of code:
I could do a macro w/ invoke_result to get the return type of the func but macros are hideous. Perhaps there is a slightly less hideous solution with templates. edit: if function try blocks were allowed on lambdas that would be perfect Dren fucked around with this message at 02:11 on May 28, 2020 |
# ? May 28, 2020 02:01 |
|
I don't think I follow. So you want to construct an object and then initialize it? Isn't the point of a constructor to avoid two-phase initialization?
|
# ? May 28, 2020 02:21 |
|
qsvui posted:I don't think I follow. So you want to construct an object and then initialize it? Isn't the point of a constructor to avoid two-phase initialization? He wants to call the constructor and if it fails, handle the exception immediately, rather than having the error-handling code all the way at the bottom of the function. One annoyance with this is that if you declare the local variable inside your try block, it goes out of scope at the end of it, so all the code that interacts with it also needs to be inside the try block - very annoying if you're trying to limit the scope of what exceptions you want to catch. So you need to declare the variable outside the try block and then do your "real" initialization inside it, which is separately annoying because now your variable can't be const. The mediocre C++ programmers that spend way too much time answering questions on stack overflow seem to have trouble comprehending why this is something that someone would want to do, rather than showing off their esoteric knowledge of some particularly elegant and clean way of doing it, so I suspect a horrible macro might be the only way to go.
|
# ? May 28, 2020 03:03 |
|
Use std::optional and emplace. You lose const; if that's a deal-breaker then you're in trouble, because you'll have to move everything logically into the initializer, which might make whatever control flow you're doing with try/catch difficult.
|
# ? May 28, 2020 03:41 |
|
Dren posted:Is there anything to help shorten the idiom of C++ code:
C++ code:
|
# ? May 28, 2020 05:17 |
|
rjmccall posted:Use std::optional and emplace. You lose const; if that's a deal-breaker then you're in trouble, because you'll have to move everything logically into the initializer, which might make whatever control flow you're doing with try/catch difficult. Thanks, this is helpful if the type doesn't have an assignment operator. It doesn't address the annoyance of putting something in the outer scope for holding the type, be that an instance of the type itself or a container like std::optional, then "really" initializing it in the inner scope. I was trying to get declaration + initialization to happen on the same line, with a catch block around it. Syntax like this would be ideal: code:
Absurd Alhazred posted:How dumb is this? Seems pretty close to what I'd wanted. Thanks. With a bit of modification it even lets the type be const! The lambdas are a bit verbose. The place where this one falls down syntactically is in handling specific exception types. exceptionHandler has to dynamic cast e to all the possible exception types. Dren fucked around with this message at 16:31 on May 28, 2020 |
# ? May 28, 2020 16:28 |
|
I see a problem now in the thing I want to do. Say the syntax I proposed in my last post were allowed: code:
code:
edit: added invoke_result_t to get a default value for T so the caller doesn't have to specify the type. The exception handler has problems though. Notably, dynamic_cast throws std::bad_cast when a cast on a reference fails. I wonder if there's a way to provide multiple exceptionHandlers with a parameter pack and only call the best fit, kind of like what std::visit does for std::variant. Dren fucked around with this message at 17:10 on May 28, 2020 |
# ? May 28, 2020 16:40 |
|
You could have the exception handler accept a std::exception_ptr argument, then rethrow it inside the handler and dispatch to catch clauses as usual. Main downside is the exception gets thrown twice, but hopefully you aren't handling exceptions in performance critical code anyway. There's a good example here: https://en.cppreference.com/w/cpp/error/current_exception
|
# ? May 28, 2020 21:18 |
|
eth0.n posted:You could have the exception handler accept a std::exception_ptr argument, then rethrow it inside the handler and dispatch to catch clauses as usual. Main downside is the exception gets thrown twice, but hopefully you aren't handling exceptions in performance critical code anyway. That's pretty good. Unfortunately, even in its best iterations, this thing is less syntax sugar and more obfuscation. I cannot bring myself to put it into any production code seeing as the whole point was to make things more expressive and improve readability.
|
# ? May 28, 2020 22:31 |
|
I'm unclear on what exactly you're trying to accomplish. Construct an object, but if its constructor throws, replace it with a default-constructed one + run some error handling code? code:
(exceptions were a mistake)
|
# ? May 28, 2020 23:33 |
|
Constructors cannot fail, they can only be failed. Foxfire_ posted:(exceptions were a mistake)
|
# ? May 29, 2020 01:03 |
|
Oh man, I was figuring you were going to do something like return an error code if the construction failed. If you don't want to affect control flow - instead just make it a different value plus maybe run some side-effecting code - then it's probably quite doable to have a perfect forwarding template that accepts an extra lambda to call when the constructor throws.
|
# ? May 29, 2020 05:32 |
|
Foxfire_ posted:I'm unclear on what exactly you're trying to accomplish. Construct an object, but if its constructor throws, replace it with a default-constructed one + run some error handling code? That one is quite similar to what Absurd Alhazred posted. It does part of what I wanted, and I posted an improvement to it where RTYPE is deduced so the caller doesn't have to supply it. What I really want is to be able to declare and init a variable, where the init function or constructor might throw, and be able to handle any exceptions without separating the declaration and init statements. And I want the resulting syntax to be less work than this (or at least close to it): code:
|
# ? May 29, 2020 06:19 |
|
Hey dudes. What's the proper way to initialize a member object in a constructor, by calling that member's constructor of the same name? Module in question The ADS1115 class I'm using as a template, since it works. C++ code:
C code:
The error, using Arduino IDE: Bash code:
Dominoes fucked around with this message at 17:39 on May 29, 2020 |
# ? May 29, 2020 17:31 |
|
Yeah, if the member does not have a default constructor, then it has to be initialized in the init list: C++ code:
|
# ? May 29, 2020 17:43 |
|
What would that look like in the full context of this? C++ code:
C++ code:
Dominoes fucked around with this message at 18:35 on May 29, 2020 |
# ? May 29, 2020 18:25 |
|
C++ code:
|
# ? May 29, 2020 18:39 |
|
Thank you very much! Works perfectly, and I learned a new syntax pattern.
|
# ? May 29, 2020 18:45 |
|
Marked up with what stuff means: C code:
From the declaration alone, it can figure out how much memory to reserve for an instance of MyClass, but nothing about how to implement the functions in it. Presumably, later in another file, you have something like: code:
|
# ? May 29, 2020 18:51 |
|
You can put static float constants in the class declaration in modern C++. code:
|
# ? May 29, 2020 19:37 |
|
Foxfire - thank you very much! I've saved your example locally as a reference to come back to whenever initializing fields and setting up classes/structs.
|
# ? May 29, 2020 20:14 |