nielsm
Jun 1, 2009



Yeah I think the least painful option in the long run is to wrap the native C++ in a C++/CLI class library, and do the actual UI in C# with WPF, interfacing with the class library. Or if the native code actually has a C interface, you could do pure C# with P/Invoke. Or if it for some reason has a COM interface you could maybe use that. But writing a UI straight in C++/CLI will be constant jumping through syntactical hoops.
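As a rough sketch of the wrapper approach described here (class and method names are invented, not from any real project): the C++/CLI assembly exposes a ref class that owns a pointer to the native object, and the C#/WPF project just references that assembly.

C++ code:
// Hypothetical native core header (the real API will differ).
class NativeEngine {
public:
    void Start();
    int Compute(int input);
};

// C++/CLI shim, compiled with /clr and referenced from the C# WPF project.
public ref class EngineWrapper {
public:
    EngineWrapper() : impl(new NativeEngine()) {}
    ~EngineWrapper() { this->!EngineWrapper(); }       // destructor maps to IDisposable.Dispose
    !EngineWrapper() { delete impl; impl = nullptr; }  // finalizer releases the native object
    void Start() { impl->Start(); }
    int Compute(int input) { return impl->Compute(input); }
private:
    NativeEngine* impl;  // a plain native pointer is fine as a member of a ref class
};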


Absurd Alhazred
Mar 27, 2010

by Athanatos

hackbunny posted:

Since I don't see a Windows thread, I guess I'll ask here

What framework would you use to make a Windows UI? Outside of Qt, which my coworker has no experience in, and the first to say "HTML" will be shot in the face

I've recommended ImGui before. Not native-looking, but very light-weight, easy to deploy, easy to learn.

hackbunny
Jul 22, 2007

I haven't been on SA for years but the person who gave me my previous av as a joke felt guilty for doing so and decided to get me a non-shitty av

nielsm posted:

Yeah I think the least painful option in the long run is to wrap the native C++ in a C++/CLI class library, and do the actual UI in C# with WPF, interfacing with the class library. Or if the native code actually has a C interface, you could do pure C# with P/Invoke. Or if it for some reason has a COM interface you could maybe use that. But writing a UI straight in C++/CLI will be constant jumping through syntactical hoops.

I'm very wary of C++/CLI, because last time I used it, it seemed buggy and distinctly "second class". I'll use it to write a managed wrapper for the C++ core, and as little else as possible. All we lose by doing the UI in C# is a single-file distribution, because the core will either be an external P/Invoked DLL or a mixed-mode assembly, neither of which can be merged into the executable's assembly. But it's no great loss. e: I stand corrected, the native linker (link.exe) can statically link C++/CLI intermediates, native libraries and netmodules into a single .exe
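For the P/Invoke route, the usual shape is a flat C surface exported from the native DLL, which the C# side then declares with matching [DllImport] signatures. A minimal sketch (the engine type and function names are made up for illustration):

C++ code:
// Hypothetical C++ core (stands in for the real one).
class NativeEngine {
public:
    int Compute(int input) { return input * 2; }
};

// Flat, exception-free C surface exported from the native DLL for P/Invoke.
extern "C" {
    __declspec(dllexport) void* engine_create()         { return new NativeEngine(); }
    __declspec(dllexport) void  engine_destroy(void* e) { delete static_cast<NativeEngine*>(e); }
    __declspec(dllexport) int   engine_compute(void* e, int input) {
        return static_cast<NativeEngine*>(e)->Compute(input);
    }
}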

hackbunny fucked around with this message at 17:21 on Jan 17, 2018

hackbunny
Jul 22, 2007

I haven't been on SA for years but the person who gave me my previous av as a joke felt guilty for doing so and decided to get me a non-shitty av

Absurd Alhazred posted:

I've recommended ImGui before. Not native-looking, but very light-weight, easy to deploy, easy to learn.

Very cute, and this:

quote:

Dear ImGui allows you to create elaborate tools as well as very short-lived ones. On the extreme side of short-livedness: using the Edit&Continue feature of modern compilers you can add a few widgets to tweak variables while your application is running, and remove the code a minute later!

is just what I had been looking for, although not for this particular project. I'll certainly keep it in mind
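For a flavour of what those throwaway widgets look like: Dear ImGui is immediate-mode, so tweak widgets are just a few calls dropped into the frame loop (variable names invented here; this assumes a backend is already set up and ImGui::NewFrame() has been called):

C++ code:
#include "imgui.h"

// Call once per frame, between ImGui::NewFrame() and ImGui::Render().
void draw_debug_ui()
{
    static float speed = 1.0f;       // the variables being tweaked live here for the demo
    static bool  wireframe = false;

    ImGui::Begin("Debug tweaks");                       // a throwaway window
    ImGui::SliderFloat("speed", &speed, 0.0f, 10.0f);   // live-edit a value
    ImGui::Checkbox("wireframe", &wireframe);
    if (ImGui::Button("Reset"))
        speed = 1.0f;
    ImGui::End();
}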

Slurps Mad Rips
Jan 25, 2009

Bwaltow!

hackbunny posted:

Visual Studio 2017 is the only hard requirement. Ease of deployment a plus, so I'd like to avoid gigantic frameworks with their own installer (unless the gigantic framework is .NET). Other than that, go hog wild


Yes, C++ is a requirement, because the UI will interface with a pre-existing native core with a C++ API. The UI developer already erred on the side of a .NET UI over a native core, although he wanted to use Windows Forms, which I've never heard much good about, and isn't it unmaintained anyway? I'll tell him to have a look at WPF before we commit to Windows Forms


Just in case, can you name any? The only one I know is WTL... which I'm glad to see is still being actively worked on

There's always C++/WinRT which is "pure" C++, comes with VS 2017 as of 15.3 and doesn't really require anything weird to deal with. There's a github you can browse if you want a better idea of what you'd be dealing with.

https://github.com/Microsoft/cppwinrt
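For a rough flavour of C++/WinRT: it's standard C++17 with header-only projections of the WinRT APIs, so even a console program can call them. A minimal sketch:

C++ code:
#include <winrt/Windows.Foundation.h>
#include <cstdio>

using namespace winrt;
using namespace Windows::Foundation;

int main()
{
    init_apartment();                                   // initialize the WinRT apartment
    Uri uri(L"https://github.com/Microsoft/cppwinrt");  // a projected WinRT type
    wprintf(L"%ls\n", uri.Domain().c_str());            // prints "github.com"
    return 0;
}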

hackbunny
Jul 22, 2007

I haven't been on SA for years but the person who gave me my previous av as a joke felt guilty for doing so and decided to get me a non-shitty av

Slurps Mad Rips posted:

There's always C++/WinRT which is "pure" C++, comes with VS 2017 as of 15.3 and doesn't really require anything weird to deal with. There's a github you can browse if you want a better idea of what you'd be dealing with.

https://github.com/Microsoft/cppwinrt

What does it interface with, though? WPF? UWP? Can you make desktop apps with it? I'm afraid my Windows development knowledge is a little outdated

Besides, we have committed to C# and WPF. It seemed the least painful choice in terms of tool support, finding people who can work in it, third party component availability, native look & feel, etc.

Zopotantor
Feb 24, 2013

...und ist er drin dann lassen wir ihn niemals wieder raus...

rjmccall posted:

The clever things you can do in the STL with iterators are cute but the usability (and safety) of the entire library is just godawful because of it. Iterator ranges let you have both, but really iterators are just overrated as a collection design primitive.

The STL obsession with iterators leads to horrors like "output iterators" that actually push_back to a vector, which is like... this is not how a reasonable library would have solved this problem.

It does not help that the language committee itself is still strictly opposed to anything like an extension-method feature.

The STL does have its warts, but it's still the best thing that happened to C++ in the last 25 years. I read the original STL paper when it was published internally at HP, and it was a revelation. It was just So Much Better than everything else we had at the time.

Of course, it then took like ten years until we actually got a compiler on HP-UX that we could use it with, but still.
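(For reference, the "output iterator that actually push_backs to a vector" being complained about above is presumably std::back_inserter; a quick illustration:)

C++ code:
#include <algorithm>
#include <iterator>
#include <vector>

int main()
{
    std::vector<int> src{1, 2, 3, 4, 5};
    std::vector<int> even;

    // std::back_inserter wraps the vector in an "output iterator" whose
    // operator= calls push_back, so the algorithm can grow the container.
    std::copy_if(src.begin(), src.end(), std::back_inserter(even),
                 [](int x) { return x % 2 == 0; });
    return 0;
}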

Dren
Jan 5, 2001

Pillbug
What is the deal with std::basic_ios::rdbuf? Specifically, version 1 here: http://en.cppreference.com/w/cpp/io/basic_ios/rdbuf

I believe there ought to be two versions of the method. The const version should return a const std::basic_streambuf<CharT, Traits>* rather than a std::basic_streambuf<CharT, Traits>*. Then there should be a non-const version of the method that returns std::basic_streambuf<CharT, Traits>*.

My reasoning is that istream's seekg method, which repositions the underlying streambuf, is non-const. So why should the const rdbuf() method give an object that allows for a manipulation that is forbidden by another part of the constness contract? Here is some sample code emphasizing the weirdness:
code:
#include <iostream>
#include <sstream>

void into_buffer(const std::istream& is) {
  std::ostringstream oss;
  oss << is.rdbuf();  // ok even though it repositions the streambuf

  std::cout << oss.str() << std::endl;
}

void into_buffer_and_reset(const std::istream& is) {
  std::ostringstream oss;
  oss << is.rdbuf();  // ok even though it repositions the streambuf

  std::cout << oss.str() << std::endl;

//  is.seekg(0);  // not ok, can't call seekg to reposition the streambuf because seekg is not const
  is.rdbuf()->pubseekpos(0);  // ok, std::basic_streambuf::pubseekpos isn't const but the pointer returned by rdbuf()
                              // also isn't const so it works
}

int main(int argc, char** argv) {
  std::string mystring("hello");
  std::istringstream iss(mystring);

  std::cout << iss.tellg() << std::endl;

  into_buffer(iss);

  std::cout << iss.tellg() << std::endl;

  into_buffer(iss);

  std::cout << iss.tellg() << std::endl;

  iss.seekg(0);

  std::cout << iss.tellg() << std::endl;

  into_buffer_and_reset(iss);

  std::cout << iss.tellg() << std::endl;

  into_buffer(iss);

  std::cout << iss.tellg() << std::endl;

  return 0;
}
and the output
code:
0
hello
5

5
0
hello
0
hello
5
I assume since this stuff has been around a long time there is a reason but this feels incorrect to me.

sarehu
Apr 20, 2007

(call/cc call/cc)
For UI libraries I saw a presentation of a Rust wrapper of libui (I think). It seemed simple enough / get poo poo done, so maybe that's a good option. On the other hand it might be "simplistic" more so than "simple."

Slurps Mad Rips
Jan 25, 2009

Bwaltow!

hackbunny posted:

What does it interface with, though? WPF? UWP? Can you make desktop apps with it? I'm afraid my Windows development knowledge is a little outdated

Besides, we have committed to C# and WPF. It seemed the least painful choice in terms of tool support, finding people who can work in it, third party component availability, native look & feel, etc.

Oh, I totally glanced over that you'd already gone with C# and WPF. C++/WinRT lets you write UWP apps, and I believe anything that would interface with UWP can call into the C++ code with little work.

hackbunny
Jul 22, 2007

I haven't been on SA for years but the person who gave me my previous av as a joke felt guilty for doing so and decided to get me a non-shitty av

Slurps Mad Rips posted:

Oh, I totally glanced over that you'd already gone with C# and WPF.

Probably because I asked in this thread and the .NET thread at the same time :v: and I keep forgetting what I said in which thread

Xarn
Jun 26, 2015
If I have a binary compiled with /EHa, is there a way to tell whether a catch (...) block has been entered because of an SEH exception or a standard C++ exception? I already figured out how to disambiguate CLR exceptions.

Ultimately I want something like this:
C++ code:
try {
    // dumb things
} catch (...) {
    if (std::current_exception() == nullptr) {
        return "CLR exception";
    }
    if ( ??? ) {
        return "SEH"; 
    }
    return "C++ exception";
}
For structured exceptions, std::current_exception() returns a weird-looking exception_ptr, but it is usable and can be thrown normally...

b0lt
Apr 29, 2005

Xarn posted:

If I have a binary compiled with /EHa, is there a way to tell whether a catch (...) block has been entered because of an SEH exception or a standard C++ exception? I already figured out how to disambiguate CLR exceptions.

Ultimately I want something like this:
C++ code:
try {
    // dumb things
} catch (...) {
    if (std::current_exception() == nullptr) {
        return "CLR exception";
    }
    if ( ??? ) {
        return "SEH"; 
    }
    return "C++ exception";
}
For structured exceptions, std::current_exception() returns a weird-looking exception_ptr, but it is usable and can be thrown normally...

You can use _set_se_translator to translate an SEH exception to whatever you want.
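A rough sketch of how that looks (assuming /EHa; the translator is installed per thread and just throws an ordinary C++ exception carrying the SEH code):

C++ code:
#include <windows.h>
#include <eh.h>
#include <stdexcept>
#include <string>

struct seh_error : std::runtime_error {
    explicit seh_error(unsigned int code)
        : std::runtime_error("SEH exception, code " + std::to_string(code)) {}
};

// Called by the runtime whenever an SEH exception crosses a C++ frame.
void translate_seh(unsigned int code, EXCEPTION_POINTERS*) {
    throw seh_error(code);
}

int main() {
    _set_se_translator(translate_seh);   // per-thread, needs /EHa
    try {
        volatile int* p = nullptr;
        *p = 42;                         // access violation -> translated
    } catch (const seh_error&) {
        return 1;                        // now distinguishable from plain C++ exceptions
    } catch (...) {
        return 2;
    }
    return 0;
}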

General_Failure
Apr 17, 2005
Do my homework for me! Well, not really.
I'm staring at a typedef struct on a wiki page. It really doesn't seem right. I feel it should be some kind of union or something. Even then I'm not really sure how to use it.
link:
https://www.riscosopen.org/wiki/documentation/show/iic_transfer

or excerpt for the lazy:

code:
"C" style construct
typedef struct iic_transfer
{
  unsigned addr:8;
  unsigned :21;
  unsigned riscos_retry:1;
  unsigned checksumonly:1;
  unsigned nostart:1;
  union
  {   unsigned checksum;
      void *data;
  } d;
  unsigned len;
} iic_transfer;

Assembler method
  R0 points to the data block which is a list of transfers set out as:-
    Word 0
          bit 0    = Write/Read
          bit 1-7  = Address of device
          bit 8-28 = Reserved
          bit 29   = Retry flag
          bit 30   = Return checksum only flag
          bit 31   = Send no start flag
    Word 1 = Pointer to memory for data to be sent/received
             OR checksum value if bit 30 of word 0 set
    Word 2 = Length of data to send/receive

  This is repeated for the number of transfers required (R1)
Now I was just going to do it in C using bitmasking on the words, but the example struct seems interesting. Problem is I have no idea how to use it, if it is usable. I look at it and see a whole bunch of unsigned ints with certain bits kind of defined.

So is it usable? If not, how can it be made to dance? I guess also, if it is, how do I use it?
It probably seems silly to most, but I'm really not used to doing things this way.

Jeffrey of YOSPOS
Dec 22, 2005

GET LOSE, YOU CAN'T COMPARE WITH MY POWERS
Reading and writing those fields will generate instructions that mask off only those bits and read/write them. The bit order within memory is compiler-dependent though, so don't rely on it for a network protocol or something. (In practice if you only have one platform it's probably fine - I'm sure this example is that case given that it's .) So 8 bits will be used for the addr bitfield, which could just be a uint8_t itself. 21 bits are reserved and not for you to use - nothing you write using this struct will set those bits no matter what. 1 bit is used for the retry field - reading from it will return 1 or 0.

You have to know more about the iic protocol to use it I think but yeah, that looks like a fine message header struct/bitfield example to me.

General_Failure
Apr 17, 2005

Jeffrey of YOSPOS posted:

Reading and writing those fields will generate instructions that mask off only those bits and read/write them. The bit order within memory is compiler-dependent though, so don't rely on it for a network protocol or something. (In practice if you only have one platform it's probably fine - I'm sure this example is that case given that it's .) So 8 bits will be used for the addr bitfield, which could just be a uint8_t itself. 21 bits are reserved and not for you to use - nothing you write using this struct will set those bits no matter what. 1 bit is used for the retry field - reading from it will return 1 or 0.

You have to know more about the iic protocol to use it I think but yeah, that looks like a fine message header struct/bitfield example to me.

So, does C throw code in for the bit shifting or something? E.g., can I just write 1 or 0 to "nostart", or do I need to shift first before writing to it?

I'm not concerned about the IIC aspect. I just didn't get the typedef. What I want to do is talk to some devices I have (we are talking Raspberry Pi here) to make sure I'm doing it right, then slap a slightly more user-friendly wrapper around the involved structures and SWIs and add it to the GPIO library I wrote a while ago. The GPIO library actually deals with the hardware directly, but presents a more friendly interface. When I wrote it, GPIO access SWIs weren't included with RISC OS, so I just did a normal C library.

e: I found some info that actually answered that. No, I don't need to do any bit fiddling and apparently it doesn't allow larger values than can fit in the bitfield. Somehow I feel that last part is implementation specific.

General_Failure fucked around with this message at 07:56 on Jan 22, 2018

Jeffrey of YOSPOS
Dec 22, 2005

GET LOSE, YOU CAN'T COMPARE WITH MY POWERS
Yeah, the compiler will generate code to do the shifting and write to just those bits. You can just write 1 to nostart. The reason people stick typedefs around structs is so they can declare them with one identifier - in the future you can declare one with just "iic_transfer foo;" instead of "struct iic_transfer foo;" because you have the typedef around it. It's basically saying "create an alias for struct iic_transfer { unsigned bunchofthings:69; } called iic_transfer".
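With the struct above, a transfer is filled in like any other struct and the compiler does the masking behind the scenes. A sketch with a made-up device address and buffer (per the wiki comment, bit 0 of addr is the write/read flag and bits 1-7 the device address; which polarity means "read" is assumed here):

code:
#include <string.h>

// iic_transfer as in the wiki excerpt above
typedef struct iic_transfer
{
  unsigned addr:8;
  unsigned :21;
  unsigned riscos_retry:1;
  unsigned checksumonly:1;
  unsigned nostart:1;
  union
  {   unsigned checksum;
      void *data;
  } d;
  unsigned len;
} iic_transfer;

// Sketch: set up a read transfer from a hypothetical device at 7-bit address 0x48.
void make_read_transfer(iic_transfer *t, void *buf, unsigned len)
{
    memset(t, 0, sizeof *t);        // zeroes everything, including the 21 reserved bits
    t->addr    = (0x48 << 1) | 1;   // bits 1-7 = device address, bit 0 = write/read flag (1 = read assumed)
    t->nostart = 0;                 // plain 0/1 assignment; the compiler does the masking
    t->d.data  = buf;               // pointer to the receive buffer
    t->len     = len;               // number of bytes to transfer
}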

Jeffrey of YOSPOS
Dec 22, 2005

GET LOSE, YOU CAN'T COMPARE WITH MY POWERS
Also the comment says this:
code:
          bit 0    = Write/Read
          bit 1-7  = Address of device
but it doesn't have a separate field for the addr and the write/read flag. Either that's wrong or I just don't know how iic works but that comment seems odd to me given that.

b0lt
Apr 29, 2005

Jeffrey of YOSPOS posted:

Also the comment says this:
code:
          bit 0    = Write/Read
          bit 1-7  = Address of device
but it doesn't have a separate field for the addr and the write/read flag. Either that's wrong or I just don't know how iic works but that comment seems odd to me given that.

The direction is part of the address.

Xarn
Jun 26, 2015

b0lt posted:

You can use _set_se_translator to translate an SEH exception to whatever you want.

Thanks, that looks fairly close to what I want.

VikingofRock
Aug 24, 2008




I'm trying to understand when pointer casts in C are safe, and I'm having trouble finding good resources on this topic. For example:

C++ code:
// This is actually C code, I just used the C++ tag to get syntax highlighting
#include <stdint.h>

struct Foo {
    int32_t x;
    int32_t y;
};

// Is this okay, assuming arr is big enough? Or are there alignment issues?
void f1(char *arr) {
    ((struct Foo *) arr)->x = 1;
}

// How about this, if `arr` initially pointed to a char array?
// What if it initially pointed to a struct Foo?
void f2(void *arr) {
    ((struct Foo *) arr)->x = 1;
}

// What about this?
void f3(void *arr) {
    uintptr_t next = ((uintptr_t) arr + sizeof(struct Foo));
    ((struct Foo *) next)->x = 1;
}

int main() {
    // simple case
    struct Foo foo = { 0, 0 };
    f1((char *) &foo);
    f2(&foo);

    // array case
    char arr[10 * sizeof(struct Foo)] = { 0 };
    f1(arr);
    f2(arr + sizeof(struct Foo));
    f3(arr + 2 * sizeof(struct Foo));
}
Can anyone offer any input here, or point me towards a good resource for this sort of thing?

qsvui
Aug 23, 2003
some crazy thing
My understanding is that compilers are allowed to assume pointers to fundamentally different types never alias, except for char*, void*, and other pointers to bytes. Since f1, f2, and f3 cast a char* (or void*) to a Foo* and then use that Foo*, it would be undefined behavior. If it were the other way around (casting a Foo* to char*), I think it would be fine.

Here's a Stack Overflow link with excerpts from the C and C++ standards: https://stackoverflow.com/a/7005988

I don't know of any good specifically C resources, but this cppreference link also summarizes these rules (look under the type aliasing section): http://en.cppreference.com/w/cpp/language/reinterpret_cast

You should be able to find more information about this by looking up "type aliasing", "strict aliasing", or "type punning".

Jeffrey of YOSPOS
Dec 22, 2005

GET LOSE, YOU CAN'T COMPARE WITH MY POWERS
please use -fno-strict-aliasing

VikingofRock
Aug 24, 2008




qsvui posted:

My understanding is that compilers are allowed to assume pointers to fundamentally different types never alias, except for char*, void*, and other pointers to bytes. Since f1, f2, and f3 cast a char* (or void*) to a Foo* and then use that Foo*, it would be undefined behavior. If it were the other way around (casting a Foo* to char*), I think it would be fine.

Here's a Stack Overflow link with excerpts from the C and C++ standards: https://stackoverflow.com/a/7005988

I don't know of any good specifically C resources, but this cppreference link also summarizes these rules (look under the type aliasing section): http://en.cppreference.com/w/cpp/language/reinterpret_cast

You should be able to find more information about this by looking up "type aliasing", "strict aliasing", or "type punning".

Thanks. I have googled those things but it's still a little unclear. For example, it seems like you are allowed to "round-trip" things, casting a Foo * to a void * and back (and then using it). See e.g. this libgit2 function, intended to be used like this. But what if the pointer was never a Foo * to begin with, and was instead just some opaque bytes (perhaps read from a file)? And what if you use a char * instead, or first cast your void * to a uintptr_t in order to do math with it?

Jeffrey of YOSPOS
Dec 22, 2005

GET LOSE, YOU CAN'T COMPARE WITH MY POWERS
Interpreting a sequence of bytes as a struct is like, C's core competency and gently caress if I'm gonna use a union to do it!!!

b0lt
Apr 29, 2005
Alignment is the other problem you're going to run into: a char[] isn't guaranteed to be aligned sufficiently for whatever you're interpreting it as.
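One way around that (a sketch using C++11 alignas): give the raw byte buffer the alignment of the type you intend to interpret it as.

C++ code:
#include <stdint.h>

struct Foo {
    int32_t x;
    int32_t y;
};

// Still just bytes, but now guaranteed to be suitably aligned for struct Foo.
alignas(struct Foo) char arr[10 * sizeof(struct Foo)];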

VikingofRock
Aug 24, 2008




b0lt posted:

Alignment is the other problem you're going to run into: a char[] isn't guaranteed to be aligned sufficiently for whatever you're interpreting it as.

Hmmm, that makes sense. So if my main looked like this:

C++ code:
int main() {
    // simple case
    struct Foo foo = { 0, 0 };
    f1((char *) &foo);
    f2(&foo);

    // array case
    struct Foo arr[10] = { { 0, 0 } };
    f1((char *) arr);
    f2(arr + 1);
    f3(arr + 2);

    // opaque bytes case
    char bytes[10 * sizeof(struct Foo)] = { 0 };
    struct Foo copies[10];
    memcpy(copies, bytes, sizeof(copies));  // needs <string.h>
    f1((char *) copies);
    f2(copies + 1);
    f3(copies + 2);
}
would that be okay (without any modification to f1(), f2(), or f3())?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

VikingofRock posted:

Thanks. I have googled those things but it's still a little unclear. For example, it seems like you are allowed to "round-trip" things, casting a Foo * to a void * and back (and then using it). See e.g. this libgit2 function, intended to be used like this. But what if the pointer was never a Foo * to begin with, and was instead just some opaque bytes (perhaps read from a file)? And what if you use a char * instead, or first cast your void * to a uintptr_t in order to do math with it?
Converting the pointer really doesn't matter in C. What matters is that if the compiler sees a memory access via a pointer to a struct type A, and a separate memory access via a pointer to a struct type B, and neither is contained in the other, then it's allowed to assume that they don't point to the same memory.

The whole rule is built on the idea that structures of different types always occupy mutually exclusive memory regions, and the only exception it allows is accesses via char pointers. As long as the memory regions don't actually overlap, pointer conversion doesn't matter at all. You can allocate memory as one type and use it as another type; it only matters if you try to access the same memory as two different types.

There are some exceptions built into most compilers, namely that accesses via a union may alias, and some stuff I forgot relating to structures starting with the same initial members.

Pointer conversion in C++ is a whole different can of worms: pointer casts can cause the address to change, there are special cases like dynamic_cast<void*>, and C-style casts may have different results depending on whether or not the classes being cast to and from are defined (which is part of why you should never use C-style casts in C++).
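A small hypothetical illustration of that assumption: because A and B are unrelated struct types, the compiler may assume the two pointers never refer to the same memory and cache the earlier store.

C++ code:
struct A { int x; };
struct B { int x; };

int demo(struct A *a, struct B *b)
{
    a->x = 1;
    b->x = 2;      // assumed not to alias a->x, so...
    return a->x;   // ...the compiler is free to fold this to the constant 1
}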

OneEightHundred fucked around with this message at 05:27 on Jan 25, 2018

Xarn
Jun 26, 2015

Jeffrey of YOSPOS posted:

please use -fno-strict-aliasing

No. Use memcpy.
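That is: copy the raw bytes into a properly typed (and properly aligned) object instead of casting the pointer; for a small fixed-size copy the compiler typically turns the memcpy into a plain load anyway. A sketch reusing the Foo from the earlier example:

C++ code:
#include <stdint.h>
#include <string.h>

struct Foo {
    int32_t x;
    int32_t y;
};

// Reinterpret a byte buffer as a Foo without aliasing or alignment problems.
struct Foo read_foo(const char *bytes)
{
    struct Foo f;
    memcpy(&f, bytes, sizeof f);   // the "blessed" way to type-pun
    return f;
}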

b0lt
Apr 29, 2005

Xarn posted:

No. Use memcpy.

Better yet, do both, since you know you have code that's invalid with -fstrict-aliasing

Jeffrey of YOSPOS
Dec 22, 2005

GET LOSE, YOU CAN'T COMPARE WITH MY POWERS

Xarn posted:

No. Use memcpy.
So you get a packet full of structs, your system has already copied it once out of the network card buffer, and you're gonna copy it again instead of interpreting it like the struct it is? Maybe I'm poisoned by my domain but that's a little insane to me in this, the language-for-interpreting-bytestrings-as-structs.

eth0.n
Jun 1, 2012

Jeffrey of YOSPOS posted:

So you get a packet full of structs, your system has already copied it once out of the network card buffer, and you're gonna copy it again instead of interpreting it like the struct it is? Maybe I'm poisoned by my domain but that's a little insane to me in this, the language-for-interpreting-bytestrings-as-structs.

Casting a pointer to char to a pointer to a struct is not a violation of strict aliasing. This is not where memcpy is needed.

nielsm
Jun 1, 2009



eth0.n posted:

Casting a pointer to char to a pointer to a struct is not a violation of strict aliasing. This is not where memcpy is needed.

But you might not have any alignment guarantees then.

Jeffrey of YOSPOS
Dec 22, 2005

GET LOSE, YOU CAN'T COMPARE WITH MY POWERS

eth0.n posted:

Casting a pointer to char to a pointer to a struct is not a violation of strict aliasing. This is not where memcpy is needed.
But it seems very easy to require it, no? There's plenty of reasons you might need to actually access it as a character array first and then access it as a struct? Maybe I'm misunderstanding something about aliasing rules but my understanding was that, if you use it and access it as a character array, you cannot also access it as a struct, and thus should turn off the silly option.

Alignment is an issue on some platforms. GCC aligned types and attribute packed can help here, but yeah, the compiler will happily convert "aligned pointers" to normal unconstrained pointers without warning by default, so being careful when using those is warranted. In general you should know your own platform here and minimize unaligned accesses but trying to eliminate them completely is pretty extreme if you're, say, handling a network protocol that sends unaligned integers as part of normal operation.

Bonfire Lit
Jul 9, 2008

If you're one of the sinners who caused this please unfriend me now.

Jeffrey of YOSPOS posted:

So you get a packet full of structs, your system has already copied it once out of the network card buffer, and you're gonna copy it again instead of interpreting it like the struct it is? Maybe I'm poisoned by my domain but that's a little insane to me in this, the language-for-interpreting-bytestrings-as-structs.
Unless you disable intrinsics, your compiler is gonna turn that memcpy into a no-op anyway (at least if you memcpy to a local variable).

eth0.n
Jun 1, 2012

Jeffrey of YOSPOS posted:

But it seems very easy to require it, no? There's plenty of reasons you might need to actually access it as a character array first and then access it as a struct? Maybe I'm misunderstanding something about aliasing rules but my understanding was that, if you use it and access it as a character array, you cannot also access it as a struct, and thus should turn off the silly option.

Alignment is an issue on some platforms. GCC aligned types and attribute packed can help here, but yeah, the compiler will happily convert "aligned pointers" to normal unconstrained pointers without warning by default, so being careful when using those is warranted. In general you should know your own platform here and minimize unaligned accesses but trying to eliminate them completely is pretty extreme if you're, say, handling a network protocol that sends unaligned integers as part of normal operation.

You can just create it as a struct object, then recv into it directly. You can even access it via a char pointer if you need byte-level access for some reason, freely interleaved with accesses to the struct itself, at least without running afoul of strict aliasing.

Basically, undefined behavior is bad, and you should avoid it wherever possible. Turning off one compiler feature which might misbehave with that UB is a workaround for crap code, not a justification for writing new crap code. You keep talking about performance, but do you actually know if the performance gains you think you are getting are real compared to correct code, and that they aren't outweighed by the performance lost because the compiler loses optimization opportunities with that feature off? The strict aliasing rule exists so compilers can optimize in certain ways.
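For the packet case that looks something like the sketch below (hypothetical wire format, POSIX recv shown for illustration): the object is a WireHeader from the start, so there's no aliasing question, and byte-level access through a char* stays legal.

C++ code:
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>   // POSIX; <winsock2.h> on Windows

struct WireHeader {
    uint32_t type;
    uint32_t length;
};

// Receive straight into the struct object; no cast from a byte buffer needed.
int read_header(int sock, struct WireHeader *hdr)
{
    ssize_t n = recv(sock, hdr, sizeof *hdr, MSG_WAITALL);
    if (n != (ssize_t) sizeof *hdr)
        return -1;                           // error or short read
    const char *bytes = (const char *) hdr;  // byte-level view is fine: char may alias anything
    (void) bytes;
    return 0;
}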

eth0.n fucked around with this message at 19:38 on Jan 25, 2018

Jeffrey of YOSPOS
Dec 22, 2005

GET LOSE, YOU CAN'T COMPARE WITH MY POWERS
The behavior isn't undefined if your compiler defines how it will behave - conforming to ISO C in this case makes your code worse, not better. My real answer is that I agree with Linus that, while I understand how the aliasing rules came to be, I think they are boneheaded as written and ought to be worked around if you need to view the same memory in multiple ways. They could have specified the language such that the degradation of aliasing assumptions could be controlled by the user, but they didn't bother doing that. The ISO C spec is not a religious text, you're allowed to question it and I think it'd be irresponsible to manage any large C project without at the very least thinking carefully about discarding the existing aliasing rules, especially if what you're doing involves receiving binary data from external sources.

Casting between pointer types in your function is a good sign that the strict aliasing assumptions are not true within that function, my complaint pretty much would go away if the spec was written such that it gave up its aliasing assumptions once that happened.

The Phlegmatist
Nov 24, 2003

Jeffrey of YOSPOS posted:

Casting between pointer types in your function is a good sign that the strict aliasing assumptions are not true within that function, my complaint pretty much would go away if the spec was written such that it gave up its aliasing assumptions once that happened.

That would complicate compiler design since you'd be adding a "look for a cast between pointer types in this scope" to your parser. Easiest way would be to add a nostrict or norestrict (to mimic C) keyword specifier for functions that could tell the compiler to not use strict aliasing in that scope.

Submit a proposal to the C++ committee and see what happens!

qsvui
Aug 23, 2003
some crazy thing
I've also read that as of C99, writing an object to a union member and then accessing that object with the other member of the union is a safe operation. This is not true for C++ though.
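For example (a sketch): this is well-defined in C99/C11, where the bytes of the last-written member are simply reinterpreted, but formally undefined behavior in C++ even though most compilers tolerate it:

code:
#include <stdint.h>
#include <stdio.h>

union pun {
    float    f;
    uint32_t u;
};

int main(void)
{
    union pun p;
    p.f = 1.0f;
    // Reading the member that wasn't last written: OK in C99/C11, UB in C++.
    printf("0x%08x\n", (unsigned) p.u);   // 0x3f800000 on IEEE-754 platforms
    return 0;
}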


Jabor
Jul 16, 2010

#1 Loser at SpaceChem
The fact that the "reinterpret the opaque sequence of bytes as a different type" function is called "memcpy" is an unfortunate historical oddity, but it still does what you're actually trying to accomplish in most cases.

Essentially, you can write correct code that also happens to be optimally fast if your compiler works the way you expect it to, or you can write fast code that is only correct if your compiler works the way you expect it to. So what makes the second one preferable to you, again?
