Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
code:
for ( eof = 0, skip = 0; ; )
{
    // read exactly one block or multiple
    for ( numchar = 0; numchar < blksize; numchar += n )
    {
        n = read( fdin, &buf[ numchar ], blksize - numchar );
        if ( n < 0 )
            err_sys("read error");
        if ( n == 0 )
        {
            eof = 1;    // end of file
            break;      // if n = 0 terminates the execution the nearest loop
        }
    }

    // check if we can skip this part
    if ( numchar == blksize )
    {
        // linear search for a byte other than null
        for ( n = 0; n < blksize; n++ )
            if ( buf[ n ] != null )
                break;
        if ( n == blksize )
        {
            skip += blksize;    // skip = skip + blksize
            continue;           // if n is equal to blksize passes control
                                // to next iteration
        }
    }

    // lseek over the null bytes
    if ( skip != 0 )
    {
        // keep one block if we got eof (to write the last block)
        if ( numchar == 0 )
        {
            skip -= blksize;        // skip = skip - blksize
            numchar += blksize;     // numchar = numchar + blksize
        }

        i = lseek( fdout, skip, SEEK_CUR ); // lseek the null bytes
        if ( i < 0 )
            err_sys("lseek error");
        skip = 0;
    }

    /* write exactly the number of characters */
    for ( n = 0; n < numchar; n += i )      // jump the hole to cont
    {
        i = write( fdout, &buf[ n ], numchar - n );
        if ( i < 0 )            // if i < 0 can't write
            err_sys("write error");
    }

    if ( eof )  // end of file
        break;
}
It's an assignment for an intro C class: use lseek to copy a file with holes in it (without expanding the file). Highlights: abuse of the for statement, use of the continue statement, a read loop whose intention I can't even work out, and unnecessary use of flags. This person will have passed at least two semesters of C# and one semester of SPARC assembly.
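For contrast, here's roughly how that loop usually gets written (a sketch only, reusing the same fdin/fdout/buf/blksize/err_sys from the assignment):

code:
/* helper: true if the chunk is entirely null bytes */
static int all_null(const char *p, size_t len)
{
    while (len--)
        if (*p++)
            return 0;
    return 1;
}

ssize_t n;
off_t hole = 0;     /* null bytes seen but not yet skipped over */

while ((n = read(fdin, buf, blksize)) > 0)
{
    if (all_null(buf, n))
    {
        hole += n;  /* defer the lseek; this might be a trailing hole */
        continue;
    }
    if (hole && lseek(fdout, hole, SEEK_CUR) < 0)
        err_sys("lseek error");
    hole = 0;
    if (write(fdout, buf, n) != n)  /* treating a short write as an error */
        err_sys("write error");
}
if (n < 0)
    err_sys("read error");

/* a trailing hole has to end in a real byte or the copy comes up short */
if (hole && (lseek(fdout, hole - 1, SEEK_CUR) < 0 || write(fdout, "", 1) != 1))
    err_sys("write error");
No flags, no for-abuse, and the read loop is just a read loop.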

Paul MaudDib fucked around with this message at 06:26 on Feb 1, 2011


Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
So the verdict is back on the Toyota runaway acceleration problem. I remember saying at the time, "It's a problem with the throttle firmware".

It's a problem with the throttle firmware.

The paragraph that jumped out at me as the worst:

quote:

The Camry ETCS code was found to have 11,000 global variables. Barr described the code as “spaghetti.” Using the Cyclomatic Complexity metric, 67 functions were rated untestable (meaning they scored more than 50). The throttle angle function scored more than 100 (unmaintainable).

Toyota loosely followed the widely adopted MISRA-C coding rules but Barr’s group found 80,000 rule violations. Toyota's own internal standards make use of only 11 MISRA-C rules, and five of those were violated in the actual code. MISRA-C:1998, in effect when the code was originally written, has 93 required and 34 advisory rules. Toyota nailed six of them.

Barr also discovered inadequate and untracked peer code reviews and the absence of any bug-tracking system at Toyota.
http://www.edn.com/design/automotive/4423428/Toyota-s-killer-firmware--Bad-design-and-its-consequences

:staredog:

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Slashdot really dislikes the idea (libertarians), but I think lawsuits over reckless development will probably help there. It's a life-critical system, and Toyota blew off the prevailing standard at the time. Seven years to comply with the tests is not unreasonable, and I'm sure bug trackers existed by 2005. Hopefully they will recall the controllers.

There ought to be some equivalent of Professional Engineers for software systems. Someone authorized to sign off (or not) on the accuracy of the design and process with their name, perhaps with differing levels of credentials. A union is certainly a historic model for that. Would Three-Phase sign off on this system?

Paul MaudDib fucked around with this message at 01:58 on Oct 30, 2013

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Manslaughter posted:

Because, as is now apparent, anyone can replace you - even in a company as big as Toyota, where your job is as important as making sure cars don't loving drive off without input from their driver.

Actually there's a magic key combo: the trick is to release the brake for a bit, which restarts (or un-deadlocks) the gas pedal task.

Cars of the future...

Paul MaudDib fucked around with this message at 02:19 on Oct 30, 2013

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

My Rhythmic Crotch posted:

Which brings us to ... testing. The logistics of these treatment centers made testing nearly impossible. Contracts stipulated that the customer had full access to the center for something like 18 hours a day, 6 days a week. And during that time, only production code was allowed. So that left us with basically nights and weekends for testing software. The cyclotron is a fickle beast, and extracting beam during those small windows of time (I'm just a software guy, not a cyclotron expert) can be really loving tricky. So some features would get tested for maybe only an hour or two at one center before being given the final blessing and put into production.

I'm pretty sure my sister was treated on one of your machines in Indiana. You'll be pleased to know that you don't appear to have murdered her.

Sadly I never got a tour of the cyclotron :(

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

evensevenone posted:

That false economics spreadsheet was revealed because people spent years taking the same historical data, doing their own analysis using the methods proposed by the paper, and not getting the same results. There were even papers published about the failure to reproduce. If they had just handed out the spreadsheet, would people have actually gone line by line and made sure all the formulas worked?

Formulas are just rudimentary source code, so if you're asking whether people would have examined source code in order to debug an unexpected result, the answer is yes.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Athas posted:

Yes, one of my friends did his master's thesis on embedded GPU programming languages in Haskell. According to him, it was a nightmare even getting the stuff to build and run, and Haskell isn't even that tricky of a language to compile. I think the issue is that research code often depends on obscure libraries or crazy setups.

That actually sounds pretty interesting. I always thought something like Erlang might perform well on GPUs.

Yeah, research by definition tends to be stuff that doesn't have nice libraries already pre-packaged for you to use. Otherwise it's not novel enough to be worth publishing.

Weird special-purpose compilers for specific devices (like GPUs) tend to be pretty bad all on their own. I've been working with CUDA a bit, and without even diving into anything as exotic as that, there's just a ton of rough edges. Nothing show-stopping, just annoying. Some of them edge pretty close to coding horrors, like the compiler not supporting linking device code across files by default (everything needs to go in one big file).

You can actually do it using more recent revisions of compilers and cards if you use some specific compiler flags, but it's not portable. The excuse given is that nvcc "isn't a compiler, it's a front-end" but that seems like BS. Even if the compiler is garbage and needs one big file to work with, you could parse separate source files together into one big file at compile time and then just compile that, you don't have to make the programmer handle one enormous file or do silly workarounds.
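If anyone's curious, the separate-compilation flags look something like this (a sketch: CUDA 5.0+ toolkit, sm_20 or newer cards, hypothetical file names):

code:
# -rdc=true emits relocatable device code so device symbols
# can be resolved across objects at device-link time
nvcc -rdc=true -c task1.cu task2.cu
nvcc -rdc=true task1.o task2.o -o myProgram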

Another annoying thing: it won't let you allocate memory shared between threads using the "static" syntax unless it's sized by a compile-time #define constant. It doesn't handle the overwhelmingly common use case where the non-constant array size is the number of threads (passed in via a struct at run time); instead you have to write the equivalent code yourself using the "dynamic" syntax. That also probably means giving up some opportunities for loop unrolling and the like, since you're throwing away information about data size and arrangement. It's probably rooted in the "stack vs heap" distinction on CPUs, but that doesn't really translate to the specific domain of __shared__ GPU memory.
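Concretely, the two syntaxes look roughly like this (THREADS is a hypothetical #define; in the "dynamic" case the byte count rides along as the third launch parameter):

code:
#define THREADS 256

// "static" syntax: the size has to be a compile-time constant
__global__ void k_static(float *v)
{
    __shared__ float buf[THREADS];
    buf[threadIdx.x] = v[threadIdx.x];
    // ...
}

// "dynamic" syntax: unsized extern array, length chosen at launch
__global__ void k_dynamic(float *v)
{
    extern __shared__ float buf[];
    buf[threadIdx.x] = v[threadIdx.x];
    // ...
}

// launch: k_dynamic<<<blocks, nthreads, nthreads * sizeof(float)>>>(d_v);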

Anecdotally I've heard that other kinds of embedded compilers tend to be really awful too.

Paul MaudDib fucked around with this message at 21:22 on May 20, 2014

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Harik posted:

Before I used it I guess it didn't have a linker? I'm guessing here, because every legacy project I've had to clean up uses #include "otherfile.c" to make one massive superfile instead of linking the separate objects together.

So you have something like this?

code:
shared.cu:
#pragma once

//any shared symbols or code here
code:
specific_task1.cu:

#include "shared.cu"
...
with a makefile like this:

code:
nvcc specific_task1.cu specific_task2.cu -o myProgram
Or a main file that in turn includes specific_task1 and specific_task2?

I think that might be the most :effort: way to get around the no-linking rule while still generating code targeting low-spec devices. Never occurred to me to try abusing the include system. :unsmigghh:

CUDA also has some dumb things that fail silently. I spent half a day trying to figure out why something wasn't working, then narrowed it down to the constants not showing up in memory right. Turns out I was using cudaMemcpy instead of cudaMemcpyToSymbol (which is the only valid way to set constant memory), so it wasn't doing anything, and both the compiler and the runtime were just letting it happen. :suicide:
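Roughly what the bug looked like, with hypothetical names:

code:
__constant__ float coeffs[16];

// what I had: compiles fine, constants never actually change
//cudaMemcpy(coeffs, h_coeffs, sizeof(coeffs), cudaMemcpyHostToDevice);

// what constant memory actually requires:
cudaMemcpyToSymbol(coeffs, h_coeffs, sizeof(coeffs));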

Also it is 100x slower to compile than the equivalent CPU code. I use the template library in some places, but c'mon.

One extra horror is that in the past cudaMemcpyToSymbol didn't even accept a real symbol as a target, instead you had to give it a cstring with the symbol name and it would be targeted at runtime or compile-time or something. So ctrl-h was probably one of the more efficient ways to rename constant symbols.
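i.e. the old form was something like this (reusing the hypothetical coeffs from above):

code:
// pre-CUDA 5: the symbol is named by string and looked up at runtime
cudaMemcpyToSymbol("coeffs", h_coeffs, sizeof(coeffs));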

Paul MaudDib fucked around with this message at 22:27 on May 20, 2014

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Athas posted:

Erlang uses task parallelism, not data parallelism, so it's not a good fit for GPUs. Great fit for clusters, though.

But you could apply data parallelism to each task. The whole idea of functional programming is to get away from specifics about how the data is computed. Applying a "gather symbols, process tokens, output symbols" model would let you process a large number of tokens in parallel (with higher latency). The fact that functional algorithms are stateless helps because persisting state is relatively expensive and/or not possible. Message-passing provides an easy framework for implementing the necessary inputs/outputs. Use global memory for high-level tokens (major function calls), and shared/local memory as a low-latency scratchpad for intermediate processing.

Obviously the internals of the VM would be totally different and at some point it stops being "Erlang" and starts being something else, but message-passing functional programming seems to have characteristics that would fit well in the GPU model.

The overall goal in such a thing would be to implement general-purpose computation in ways that aren't horribly inefficient. CUDA programs are really dependent on the host to oversee everything (kernel launches, etc) and the tools to synchronize threads on the device are weak. Some things (like malloc) are just not available inside device code and wouldn't work well anyway (given memory coalescing, etc). In addition to that, CUDA does really badly when threads in an algorithm diverge and do different things (all cores execute all code paths but some are masked to inactive). It would be nice if GPUs had an approach that could handle non-trivial amounts of divergence while not totally sucking, and implementing algorithms as a series of tasks to process seems like a reasonable way to get big batches to work on.
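A toy illustration of the divergence problem: in the kernel below the even and odd lanes of a warp execute serially, each half sitting idle while the other runs.

code:
__global__ void divergent(int *data)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i % 2 == 0)
        data[i] *= 2;   // even lanes execute; odd lanes are masked off
    else
        data[i] += 1;   // odd lanes execute; even lanes are masked off
}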

Dynamic parallelism addresses a few of these limits, but it's not a silver bullet, and it's not commonly available.

Paul MaudDib fucked around with this message at 01:06 on May 21, 2014

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Soricidus posted:

Email address validation with a regex doesn't get you anything. It's very likely you'll reject valid addresses, and likewise it's very likely users will input "valid" addresses that have typos in them. The way to validate an email address is to accept whatever the gently caress the user types and then send it an email with a validation link in it.

Yeah, I've had valid email addresses rejected by some rather highbrow sites. I had a university email address formatted first.m.last@university.edu and Amazon wouldn't let me register it for Amazon Student. Not exactly a weird scheme for an educational email address. It wasn't the first time that happened, either; at one point I got fed up and asked the university to change it to first.last, but no dice, ADDRESSES ARE WHAT THEY ARE :bahgawd:

Gmail +tag filtering sounded really cool until I realized that the number of sites that accept those email addresses as valid can be counted on one hand.

I really wish more programmers would just take that advice. Filter any escape characters, send a validation email, and if the link gets clicked it's valid, end of story.
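In code terms the entire pre-send check can be about this long (a sketch; the real validation is whether the mail arrives and the link gets clicked):

code:
#include <string.h>

/* about the only syntax check worth doing before sending the mail */
int plausible_email(const char *s)
{
    const char *at = strchr(s, '@');
    return at != NULL && at != s && at[1] != '\0';
}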

Paul MaudDib fucked around with this message at 20:47 on Jul 19, 2014

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Volmarias posted:

Did you ever contact Amazon? It seems like a needlessly specific requirement.

Yeah, they fixed it within a week or two (circa 2010)

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Westie posted:

Muenster.

Yeah, the standard way to transliterate umlauts into alpha characters is to follow the umlaut'ed character with an e, so Münster becomes Muenster. The postal service will figure it out from there.

See also the eszett (ß), which is transliterated as "ss", so "Fuß" becomes "Fuss".

Really the problem isn't the transliteration, though; it's the fact that the software is requiring validated addresses. I've run into that sometimes with cities and ZIP codes: places like universities tend to have their own mail systems with their own ZIP or +4 coding, and those tend not to be in the databases used to validate addresses. It's gotten better over the years; most electronic addressing systems will now try to auto-correct the address to the closest thing they can validate, but also let you override it and put in a different address if you're really sure.

Paul MaudDib fucked around with this message at 18:31 on Jul 21, 2014

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Lysidas posted:

btrfs is in good hands:

Nick Krause is clearly a dumbass. Yeah, google his email address and you come up with one bad, untested patch after another.

That said, page_cache_release seems like a dumb name for a method that doesn't actually release page cache. If it releases a reference to the page, why not name the method release_page_cache_reference instead?
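If I'm remembering the kernel source of that era right, it was literally just an alias for dropping a reference:

code:
/* from memory, roughly what include/linux/pagemap.h had at the time: */
#define page_cache_release(page)  put_page(page)  /* drop one ref; freed only at zero */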

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE



quote:

If we now divide the number of comments in a subreddit containing a chosen word by the overall subreddit comment count (and multiply by 10000 to have a nice integer value), we get more ... well, diagrams.

https://github.com/Dobiasd/programming-language-subreddits-and-their-choice-of-words
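i.e. the metric is just this (a sketch of the arithmetic from the quote):

code:
/* comments mentioning the word, per 10,000 subreddit comments */
long per_10k(long with_word, long total)
{
    return with_word * 10000 / total;
}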

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

ExcessBLarg! posted:

It sounds like the perks Google is really offering are convenience and a subtle encouragement for everyone to work longer hours.

That's always been my understanding of the calculus in offering big "lifestyle" perks. The free food and ballpit are there to keep you from caving to your human needs while you're pulling long hours at a desk.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SupSuper posted:

Their wiki is something else.

When all you have is a hammer, you implement an operating system in server-side javascript.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Symbolic Assignment Considered Harmful
--programmers, 2014

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

ErIog posted:

Most of the ire toward Unity in the game dev community is somewhat recent, though, so I wouldn't really blame anyone for having picked it for a project that's been in development for a long time. If a dude's got time to decompile and bitch about what he finds online, then the Unity engine itself is a much more target-rich environment for horrors.

Honestly it's still great for what it is. It's My First Game Studio, and it does a pretty OK job of balancing user friendliness with capability.

Pretty much every product has some rough corners, the question is total time relative to the alternative. My understanding is that building your game on something like Unreal is going to involve a lot more effort and up-front investment.

Paul MaudDib fucked around with this message at 23:29 on Sep 22, 2014

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Subjunctive posted:

People who use the word "kernel" outside of actual privileged-operating-system-code are usually trying to excuse cleverness that doesn't pay for itself, or otherwise indulge their illusions of Kernighan-ness. Beware.

Or they're CUDA developers.

quote:

2.1. Kernels

CUDA C extends C by allowing the programmer to define C functions, called kernels, that, when called, are executed N times in parallel by N different CUDA threads, as opposed to only once like regular C functions.
http://docs.nvidia.com/cuda/cuda-c-programming-guide/#kernels

NVIDIA couldn't stop themselves from using the word that means "privileged OS code" everywhere else to name "random program that runs on the GPU".
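For anyone who hasn't seen one, a CUDA "kernel" is just any function marked __global__, e.g. (sketch):

code:
// launched N-wide by the host; no privileges involved anywhere
__global__ void scale(float *v, float s)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    v[i] *= s;
}
// host side: scale<<<blocks, threadsPerBlock>>>(d_v, 2.0f);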

Paul MaudDib fucked around with this message at 16:17 on Oct 15, 2014

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Blotto Skorzany posted:

Oh no, the cancer has spread to signal processing textbooks! :ohdear:



I guess I don't see how we got from "a square matrix used to convolve an image" to "an arbitrary program executed on a GPGPU processor". One refers to data processed by a program (fixed pipeline or no), the other refers to the program itself. The convolution operation is the same no matter what matrix you feed it.

I guess it's not a horror, since different kernels do different things, but it's not quite the same thing. You're basically calling a matrix a function because different matrices produce different results when you stick them inside some other operation. Except in this case it's the other way around: the term "kernel" in the convolution sense was there first and was later appropriated.
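For comparison, a "kernel" in the convolution sense is pure data, e.g. the classic edge-detection matrix:

code:
/* convolving an image with this picks out edges; the convolution
   routine itself never changes, only the matrix you hand it */
const float edge_kernel[3][3] = {
    { -1, -1, -1 },
    { -1,  8, -1 },
    { -1, -1, -1 },
};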

Unless it comes out of GPGPU originally being a hack on top of the fixed-function pipeline. Did people write GPGPU programs using convolution kernels rendered onto a bitmap, or something like that? I know they did with shaders...

Paul MaudDib fucked around with this message at 02:21 on Oct 18, 2014

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Electronic Health Record systems are the worst :negative:

The documentation is scarce, incomplete, and outright wrong in places. The EHR itself runs on a browser version that is slated to EOL soon, and has an insane list of runtime requirements (e.g. it needs to run as the real Administrator superuser, not just an admin-privileged account). And it looks like there are no db-level audit records of API access, and it would be almost trivially easy to dump all the PHI in the DB.

Yesterday, I found this gem in one of our classes (from memory):

code:

int getAge(personId)
{
	Date d = getDateOfBirth(personId);
	Date now = new Date();

	int age = 0;
	while(d < now)
	{
		d.addYear();
		age++;
	}

	return age;
}
:negative:
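For the record, the boring version is constant-time date arithmetic, something like this (same from-memory pseudocode style, with hypothetical year/month/day fields):

code:
int getAge(personId)
{
	Date d = getDateOfBirth(personId);
	Date now = new Date();

	int age = now.year - d.year;
	if (now.month < d.month ||
	    (now.month == d.month && now.day < d.day))
		age--;	// birthday hasn't come around yet this year

	return age;
}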

Paul MaudDib fucked around with this message at 09:00 on Nov 26, 2014

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Skuto posted:

Java Date is millisecond-accurate, so ignoring day/month is not the problem. It's just running the loop one time too many.

I think it's broken on your birthdays because of < instead of <=?

Edit: I guess for most ways of initializing the birth date it won't matter.

Maybe I work in Korea, you guys :ninja:

I'm not copying/pasting direct code, so it's just a quick, possibly incorrect summary from memory. The loop was the funny part, don't get too hung up on the details.


Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Jabor posted:

The biggest issue with crypto is that crypto code can be completely, horrendously broken and still appear to work fine. The first time you're likely to learn of a flaw in your crypto code is when someone exploits it to gain unfettered access to whatever is being protected, at which point it's a little late to be closing the barn door.


Yup. When you get into writing crypto you have to start thinking about things like timing attacks and side-channel attacks (all paths through your routines need to take the same amount of time and processing, consume the same amount of power, produce the same emissions, etc.).

Your crypto can in fact be using a properly designed, secure cipher via a programmatically sound implementation and still be completely vulnerable.
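The textbook small-scale example is comparing secrets: a naive memcmp returns at the first mismatching byte, so response timing leaks how much of an attacker's guess was right. The constant-time version looks like this (sketch):

code:
#include <stddef.h>

/* no early exit: runtime doesn't depend on where the inputs differ */
int ct_equal(const unsigned char *a, const unsigned char *b, size_t len)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}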

Paul MaudDib fucked around with this message at 17:20 on Jan 16, 2015
