Beef
Jul 26, 2004

Carbon dioxide posted:

If that does cause problems, well, every recursive function can be rewritten to not have recursion at all.

I think you're confusing things here. Only tail-recursive functions can be rewritten to not have recursion. Try writing a BFS/DFS without using a queue or stack.
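For the tail-recursive case the rewrite is purely mechanical. A minimal sketch (toy function, made up for illustration): the self-call becomes reassignment of the parameters plus a loop, and no extra storage is needed.

code:
/* tail-recursive: the self-call is the last thing that happens */
int sum_to(int n, int acc) {
    if (n == 0) return acc;
    return sum_to(n - 1, acc + n);
}

/* the same function de-recursed: parameters become loop state */
int sum_to_loop(int n, int acc) {
    while (n != 0) {
        acc += n;
        n -= 1;
    }
    return acc;
}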


Soricidus
Oct 21, 2010
freedom-hating statist shill

Beef posted:

I think you're confusing things here. Only tail-recursive functions can be rewritten to not have recursion. Try writing a BFS/DFS without using a queue or stack.

No you're the one who is confusing things. All recursive algorithms can be rewritten to not have recursion. Often this means using an explicit queue or stack, yes, but using a stack doesn't mean you're recursing.

The distinction is important because actual computers often impose limits on the depth of function calls even if there's plenty of memory available on the heap.

Beef
Jul 26, 2004
Wait, you actually believe that because you use the heap instead of the built-in stack, you magically make something not recursive?

The whole point of something being non-recursive is that it takes no extra space, regardless of how many iterations you take.

Soricidus posted:

The distinction is important because actual computers often impose limits on the depth of function calls even if there's plenty of memory available on the heap.

Your computer is not aware of any stacks, heaps or function calls; these are conventions used by compilers/runtimes. Also, there is just as much stack available as there is heap; they grow towards each other: the stack grows down, the heap grows up.

edit: I get your point that language runtimes limit stack depth, yes. Rewriting a function to use a manual stack instead of the language runtime's stack does not change the fact that you are writing a recursive function.

Beef fucked around with this message at 11:27 on Feb 13, 2017

b0lt
Apr 29, 2005

Beef posted:

Wait, you actually believe that because you use the heap instead of the built-in stack, you magically make something not recursive?

The whole point of something being non-recursive is that it takes no extra space, regardless of how many iterations you take.


Your computer is not aware of any stacks, heaps or function calls; these are conventions used by compilers/runtimes. Also, there is just as much stack available as there is heap; they grow towards each other: the stack grows down, the heap grows up.

edit: I get your point that language runtimes limit stack depth, yes. Rewriting a function to use a manual stack instead of the language runtime's stack does not change the fact that you are writing a recursive function.

Do you know what recursive means?

edit: also, your computer certainly is aware of "the stack" and function calls. x86 has dedicated machinery to accelerate certain operations on rsp/rbp, because they happen all the time. Function calls also get tons of specialized optimization, because branches will take up all of your CPU time if you don't have sufficiently smart prediction.

There definitely isn't as much stack as there is heap unless you're using the most idealized babby model. Allocators haven't used brk exclusively for allocation since like 1970, and threads have a fixed stack size. The main thread's stack will almost definitely bump into another thread's stack or some arbitrary mmapped data way before it hits the "heap". (Also, the stack growing negatively is just what happens to be the behavior on most current systems. It could grow positively, or each stack frame could be in some completely arbitrary place.)

b0lt fucked around with this message at 12:47 on Feb 13, 2017

sunaurus
Feb 13, 2012

Oh great, another bookah.

Beef posted:

Wait, you actually believe that because you use the heap instead of the built-in stack, you magically make something not recursive?

The whole point of something being non-recursive is that it takes no extra space, regardless of how many iterations you take.


Your computer is not aware of any stacks, heaps or function calls; these are conventions used by compilers/runtimes. Also, there is just as much stack available as there is heap; they grow towards each other: the stack grows down, the heap grows up.

edit: I get your point that language runtimes limit stack depth, yes. Rewriting a function to use a manual stack instead of the language runtime's stack does not change the fact that you are writing a recursive function.

So let me get this straight. You're saying that if I iterate over a stack in any algorithm, that means I'm using recursion?

sunaurus fucked around with this message at 12:50 on Feb 13, 2017

Soricidus
Oct 21, 2010
freedom-hating statist shill

Beef posted:

Wait, you actually believe that because you use the heap instead of the built-in stack, you magically make something not recursive?
Yes, I actually believe that. That's because a recursive function is one that calls itself. That's what the word means. If your code does not involve functions that call themselves, it is not recursive. Nothing to do with memory usage - if it was about non-constant memory then tail recursion wouldn't be recursion!

Beef posted:

Your computer is not aware of any stacks, heaps or function calls; these are conventions used by compilers/runtimes.
I was using "computer" as a shorthand for "complete system running on a computer, including the operating system and runtime libraries and other stuff that can affect how much recursion is permitted".

But you also might like to read a processor manual some time. The CPU knows more about stacks and function calls than you think.

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?

Carbon dioxide posted:

Of course, there are fine solutions for each of your examples in modern languages. In the first, replace goto feh either with whatever it needs to do right there or with a function call. Then put a break statement after it.
In the second, well, if you don't have tail calls you can do a regular recursive call on the function - the optimization loss is minimal in most cases on modern systems. If that does cause problems, well, every recursive function can be rewritten to not have recursion at all.

You have not convinced me that goto has any uses outside languages such as C and assembly that require/allow you to do low level memory manipulation yourself.

Well, anything written with goto could be rewritten not to use it, so just pointing out that there are other ways to write these isn't saying much. How do your suggestions make the code clearer? Also, I think it's a bit ridiculous to replace an O(1) space complexity algorithm with an O(N) one just to avoid a goto.

Doom Mathematic
Sep 2, 2008
The point is, ladies and gentlemen, that goto, for lack of a better word, is good. goto is right. goto works. goto clarifies, cuts through,

Beef
Jul 26, 2004

sunaurus posted:

So let me get this straight. You're saying that if I iterate over a stack in any algorithm, that means I'm using recursion?

No, of course not. My point was the other way around. If you are rewriting a recursive process using an explicit stack (without involving self-calling functions), you are still doing recursion (sketch below):
- you are still allocating/using memory linear in the call depth,
- you are still unwinding/popping your stack on "function return", and
- you are still using the popped result to do some processing.
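Concretely, here's a made-up sketch of what I mean. Nothing calls itself, but the explicit stack still grows with the amount of deferred work, exactly like the call-stack version:

code:
#include <stdio.h>

#define MAX_DEPTH 1024  /* assumed bound on pending nodes */

struct node { int value; struct node *left, *right; };

/* DFS with an explicit stack: no self-call, but still O(depth)
   storage for the deferred branches */
void dfs(struct node *root) {
    struct node *stack[MAX_DEPTH];
    int top = 0;
    if (root) stack[top++] = root;
    while (top > 0) {
        struct node *n = stack[--top];
        printf("%d\n", n->value);              /* "visit" */
        if (n->right) stack[top++] = n->right;
        if (n->left)  stack[top++] = n->left;
    }
}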

Soricidus posted:

Yes, I actually believe that. That's because a recursive function is one that calls itself. That's what the word means. If your code does not involve functions that call themselves, it is not recursive. Nothing to do with memory usage.

Right, you are talking about syntax. I was talking about the actual recursive process.

SICP posted:

In contrasting iteration and recursion, we must be careful not to confuse the notion of a recursive process with the notion of a recursive procedure. When we describe a procedure as recursive, we are referring to the syntactic fact that the procedure definition refers (either directly or indirectly) to the procedure itself. But when we describe a process as following a pattern that is, say, linearly recursive, we are speaking about how the process evolves, not about the syntax of how a procedure is written. It may seem disturbing that we refer to a recursive procedure such as fact-iter as generating an iterative process. However, the process really is iterative: Its state is captured completely by its three state variables, and an interpreter need keep track of only three variables in order to execute the process.

The whole distinction between iteration and recursion is about resource usage. That is why it is useful to talk about something being recursive or iterative: rewriting your recursive-function-style DFS as a while loop does not suddenly guarantee you can run it on arbitrarily large trees. It means you will run into your heap limit rather than your stack limit; the heap limit, granted, is much larger (ulimit -s shows 8M of stack on my machine, KMP_STACKSIZE shows 4M, and our cluster nodes have unlimited stack size for some reason).
Just talking about syntactic recursion is not that useful: GCC happily compiles away tail-recursive functions, and Scheme even guarantees it in the language spec.

Soricidus posted:

if it was about non-constant memory then tail recursion wouldn't be recursion!

Exactly!


Soricidus posted:

But you also might like to read a processor manual some time. The CPU knows more about stacks and function calls than you think.

Yeah, I do, frequently. I work at the micro-architecture level. The push/pop in x86 used to be a bigger deal in the '80s, which is why most people still think it matters. Granted, there is still some extra call-stack hardware in some microarchitectures, but it will not give you a big performance boost compared to doing the loads/stores yourself. I was wrong to say that the processor knows nothing about the call stack, yes, but I said that because in practice there is very little left compared to what people think. There is the return-address push/pop; the optimization guide will tell you to avoid the other stack-related instructions (e.g. enter/leave, task switching).

Beef fucked around with this message at 17:17 on Feb 13, 2017

sarehu
Apr 20, 2007

(call/cc call/cc)

Carbon dioxide posted:

Of course, there are fine solutions for each of your examples in modern languages. In the first, replace goto feh either with whatever it needs to do right there or with a function call.
It does nothing right there... so (*translates your response into something that could make sense*) you want to lift the rest of the function body after the label feh into a function call? That could require janitoring a lot of parameters around. This is a simplified example and your strategy doesn't scale. You're making more work for yourself and have to blow your mind's stack to do this while keeping track of whatever else you were doing.

Carbon dioxide posted:

You have not convinced me that goto has any uses outside languages such as C and assembly that require/allow you to do low level memory manipulation yourself.
I had a use for it the other week.

eth0.n
Jun 1, 2012

Beef posted:

The whole distinction between iteration and recursion is about resource usage.

The term "recursion" itself says nothing about "resource usage". It means self-referential definition.

Sure, when applied to functions in computer programming languages, it is often implemented in a way that runs into significant resource limits (relatively small call stacks), but if you refactor a recursive implementation into an iterative one that does not call itself, then no, it's not recursion anymore. Even if the algorithm is essentially identical, and you literally just turn the call stack into an explicit one. Recursion refers to how something is defined, not the run-time behavior a compiler comes up with.

The main difference between function recursion and reimplementation using an explicit stack is that it's much easier to make the latter safe. It's hard to detect when a call will blow the built-in stack. With an explicit stack, we know how much is available and how much is used, and typically we can allocate much more space to it.
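As a sketch of what "safe" can look like (invented types; pick your own growth policy), running out of room becomes an error you can report instead of a process-killing overflow:

code:
#include <stdlib.h>

struct stack { void **items; size_t used, cap; };

/* push with a visible failure mode */
int push(struct stack *s, void *item) {
    if (s->used == s->cap) {
        size_t ncap = s->cap ? s->cap * 2 : 64;
        void **p = realloc(s->items, ncap * sizeof *p);
        if (!p) return -1;  /* out of memory: report, don't crash */
        s->items = p;
        s->cap = ncap;
    }
    s->items[s->used++] = item;
    return 0;
}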

Beef
Jul 26, 2004

eth0.n posted:

The term "recursion" itself says nothing about "resource usage". It means self-referential definition.

You are conflating the English definition with the computer science terminology.

wikipedia posted:

Recursion in computer science is a method where the solution to a problem depends on solutions to smaller instances of the same problem (as opposed to iteration).[1]

[1] Graham, Ronald; Knuth, Donald; Patashnik, Oren (1990). Concrete Mathematics. Chapter 1: Recurrent Problems.

Soricidus
Oct 21, 2010
freedom-hating statist shill

Beef posted:

You are conflating the English definition with the computer science terminology.

Nope. We're using the word in the standard sense in which it's used in the field of professional software development, which -- unlike computer science -- is the subject of this thread.

Beef
Jul 26, 2004
Yeah, programming has nothing to do with computer science, nice.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Beef posted:

Yeah, programming has nothing to do with computer science, nice.

Slightly hyperbolic, but essentially correct.

Computer science is an academic discipline. Programming is a trade skill.

nielsm
Jun 1, 2009



You can just as well talk about recursive data structures, a canonical example being a singly-linked tree. If you want to operate on the tree structure your algorithm will by necessity require O(depth) working storage, regardless of whether you implement it by recursive function definition or a non-recursive manual stack.

Of course, if you make the tree doubly-linked (nodes also point to their parent, not just their children) you can do away with recursion/stack-keeping, but that's just trading additional static storage for savings in dynamic storage.
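For instance (a rough sketch, assuming every node carries a parent pointer), a pre-order traversal can run in O(1) extra space because the tree itself acts as the stack:

code:
#include <stdio.h>

struct node { int value; struct node *left, *right, *parent; };

void traverse(struct node *root) {
    struct node *cur = root;
    while (cur) {
        printf("%d\n", cur->value);            /* "visit" */
        if (cur->left)       cur = cur->left;
        else if (cur->right) cur = cur->right;
        else {
            /* climb until there's an untaken right branch */
            struct node *child = cur;
            cur = cur->parent;
            while (cur && (cur->right == child || !cur->right)) {
                child = cur;
                cur = cur->parent;
            }
            if (cur) cur = cur->right;
        }
    }
}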

Beef
Jul 26, 2004

Jabor posted:

Slightly hyperbolic, but essentially correct.

Computer science is an academic discipline. Programming is a trade skill.

Indeed. The increasing split between the two has been a source of considerable frustration for me, both in academics not being able to program their way out of a paper bag and in developers not understanding the underlying concepts.

xtal
Jan 9, 2011

by Fluffdaddy
Good lord that's the last time I mention goto

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

xtal posted:

Good lord that's the last time I mention goto

That's the real reason goto is considered harmful.

Beef
Jul 26, 2004

nielsm posted:

You can just as well talk about recursive data structures, a canonical example being a singly-linked tree. If you want to operate on the tree structure your algorithm will by necessity require O(depth) working storage, regardless of whether you implement it by recursive function definition or a non-recursive manual stack.

Of course, if you make the tree doubly-linked (nodes also point to their parent, not just their children) you can do away with recursion/stack-keeping, but that's just trading additional static storage for savings in dynamic storage.

Good example. You are essentially embedding your stack in your data structure in the doubly-linked case (an extra pointer per node, though any one traversal only follows O(depth) of them), but it can be worth it because you can use them to speed up other operations as well (e.g. rebalancing, lock-free changes).

omeg
Sep 3, 2012

I only write low level stuff in C nowadays. I've seen
code:
do {
   ...
   if (error) break;
} while (0); 
used for error handling. Thoughts? :v:
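Fleshed out, the pattern usually looks something like this (file name, buffer size and error codes invented for the example):

code:
#include <stdio.h>
#include <stdlib.h>

#define BUFSZ 4096

int do_work(void) {
    int err = 0;
    char *buf = NULL;
    FILE *f = NULL;

    do {
        buf = malloc(BUFSZ);
        if (!buf) { err = -1; break; }
        f = fopen("data.bin", "rb");
        if (!f) { err = -2; break; }
        if (fread(buf, 1, BUFSZ, f) != BUFSZ) { err = -3; break; }
        /* ... actual work ... */
    } while (0);

    /* single cleanup path, no goto */
    if (f) fclose(f);
    free(buf);
    return err;
}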

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe

omeg posted:

I only write low level stuff in C nowadays. I've seen
code:
do {
   ...
   if (error) break;
} while (0); 
used for error handling. Thoughts? :v:

Doesn't work for nested loops; goto is superior. :v:
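Toy example (names made up): break only leaves the inner loop, while a goto escapes both in one hop.

code:
#define ROWS 4
#define COLS 4

int find(int grid[ROWS][COLS], int target, int *r_out, int *c_out) {
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            if (grid[r][c] == target) {
                *r_out = r;
                *c_out = c;
                goto found;  /* break would only exit the inner loop */
            }
    return 0;  /* not found */
found:
    return 1;
}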

omeg
Sep 3, 2012

TooMuchAbstraction posted:

Doesn't work for nested loops; goto is superior. :v:

Haven't seen a nested loop in my code in a long time :v:

Beef
Jul 26, 2004
Goto discussion and no one mentioned Linus?

Incidentally, the code at the bottom of that thread looks very similar to Apple's goto fail.

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe

Beef posted:

Incidentally, the code at the bottom of that thread looks very similar to Apple's goto fail.

This is a "omitting curly braces from code blocks considered harmful" story, not a "goto considered harmful" story.

Dylan16807
May 12, 2010

Beef posted:

quote:

Recursion in computer science is a method where the solution to a problem depends on solutions to smaller instances of the same problem (as opposed to iteration).

In many instances that boils down to how you decide to view the function, not what it actually does. If I loop over an array I can view that as iteration, or I can view that as recursively reducing the problem to an array with one less item in it. So I don't think that definition is a very practical one.

hobbesmaster
Jan 28, 2008

Dylan16807 posted:

In many instances that boils down to how you decide to view the function, not what it actually does. If I loop over an array I can view that as iteration, or I can view that as recursively reducing the problem to an array with one less item in it. So I don't think that definition is a very practical one.

It's as if an algorithm could be expressed either way! :v:

Dylan16807
May 12, 2010

hobbesmaster posted:

It's as if an algorithm could be expressed either way! :v:

I mean beyond that. The exact same expression, exact same code, can be interpreted as "recursive" or "not recursive". Some definitions are more helpful than others for actually making a dividing line, and that definition isn't very helpful.

JawnV6
Jul 4, 2004

So hot ...

Beef posted:

Yeah, I do, frequently. I work at the micro-architecture level. The push/pop in x86 used to be a bigger deal in the '80s, which is why most people still think it matters. Granted, there is still some extra call-stack hardware in some microarchitectures, but it will not give you a big performance boost compared to doing the loads/stores yourself. I was wrong to say that the processor knows nothing about the call stack, yes, but I said that because in practice there is very little left compared to what people think. There is the return-address push/pop; the optimization guide will tell you to avoid the other stack-related instructions (e.g. enter/leave, task switching).
What, pray tell, is work at the "micro-architecture level"? Like, do you know when the CRSB was augmented with an RRSB, or when the zero-length call optimization was put in? Or are you just referring to "I read an MSR and checked the value" as being some hot poo poo?

hobbesmaster posted:

When you're implementing a flow control feature not in your language. For example, try/throw in C.
Nah, setjmp/longjmp do this better.
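A minimal sketch (names invented): setjmp marks the "catch" point and returns 0; a later longjmp unwinds straight back to it, making setjmp return the thrown code.

code:
#include <setjmp.h>
#include <stdio.h>

static jmp_buf catch_point;

static void might_throw(int bad) {
    if (bad)
        longjmp(catch_point, 42);     /* "throw" */
    puts("no throw");
}

int main(void) {
    int code = setjmp(catch_point);   /* 0 now, 42 after longjmp */
    if (code == 0)
        might_throw(1);               /* "try" */
    else
        printf("caught %d\n", code);  /* "catch" */
    return 0;
}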

Fergus Mac Roich
Nov 5, 2008

Soiled Meat
Relevant SICP passages on recursion vs iteration

quote:

In contrasting iteration and recursion, we must be careful not to confuse the notion of a recursive process with the notion of a recursive procedure. When we describe a procedure as recursive, we are referring to the syntactic fact that the procedure definition refers (either directly or indirectly) to the procedure itself. But when we describe a process as following a pattern that is, say, linearly recursive, we are speaking about how the process evolves, not about the syntax of how a procedure is written. It may seem disturbing that we refer to a recursive procedure such as fact-iter as generating an iterative process. However, the process really is iterative: Its state is captured completely by its three state variables, and an interpreter need keep track of only three variables in order to execute the process.

One reason that the distinction between process and procedure may be confusing is that most implementations of common languages (including Ada, Pascal, and C) are designed in such a way that the interpretation of any recursive procedure consumes an amount of memory that grows with the number of procedure calls, even when the process described is, in principle, iterative. As a consequence, these languages can describe iterative processes only by resorting to special-purpose "looping constructs" such as do, repeat, until, for, and while. The implementation of Scheme we shall consider in chapter 5 does not share this defect. It will execute an iterative process in constant space, even if the iterative process is described by a recursive procedure. An implementation with this property is called tail-recursive. With a tail-recursive implementation, iteration can be expressed using the ordinary procedure call mechanism, so that special iteration constructs are useful only as syntactic sugar.

Not that SICP has some special authority on the subject, but the distinction being made seems very clear and practical to me.

hobbesmaster
Jan 28, 2008

JawnV6 posted:

Nah, setjmp/longjmp do this better.

If only everyone did this.

NihilCredo
Jun 6, 2011

suppress anger in every possible way:
that one thing will defame you more than many virtues will commend you

I'm late to the party, but: if someone writes a literal goto in TYOOL 2017, they are almost certainly aware that it's strongly discouraged, and they almost certainly have a reason for using it. It probably won't be a sufficient reason, but at least they gave it some thought, even if they came up with a poor solution.

What is much more dangerous in TYOOL 2017 is the lovely try/catch pattern used for perfectly non-exceptional situations, which is actually just a thinly-disguised goto and tightly couples parts of your code that shouldn't even know about each other's existence.

dougdrums
Feb 25, 2005
CLIENT REQUESTED ELECTRONIC FUNDING RECEIPT (FUNDS NOW)

JawnV6 posted:

or when the zero-length call optimization was put in?

I have no idea what the answer to this is, but I gotta wonder what's making all of these zero-length calls. Like, you're referring to the thing where you fetch rip, right?

Good Dumplings
Mar 30, 2011

Excuse my worthless shitposting because all I can ever hope to accomplish in life is to rot away the braincells of strangers on the internet with my irredeemable brainworms.
what is the "developer's definition" of the color red because holy poo poo I'm glad I don't have to work with you "my way or the highway" guys

Hughlander
May 11, 2005

Good Dumplings posted:

what is the "developer's definition" of the color red because holy poo poo I'm glad I don't have to work with you "my way or the highway" guys

0xff0000... DUH.

JawnV6
Jul 4, 2004

So hot ...

dougdrums posted:

I have no idea what the answer to this is, but I gotta wonder what's making all of these zero-length calls. Like, you're referring to the thing where you fetch rip, right?

It's the only way on x86 to retrieve the current PC architecturally.
code:
  call L0
L0:
  pop [e|r]ax
Now, if you've got a polite interest in making call/ret performant, you store the return address for each call in the predictor somewhere. That call won't ever have a matching ret. But if you naively store that return address and predict it for the next ret, you'll get off by one and screw up every prediction after that.

Honestly this is the sort of thing I'd be surprised that software folk had any awareness of whatsoever, but anyone working at the "micro-architectural level" would surely have some hint of it.

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

JawnV6 posted:

It's the only way on x86 to retrieve the current PC architecturally.
code:
  call L0
L0:
  pop [e|r]ax
Now, if you've got a polite interest in making call/ret performant, you store the return address for each call in the predictor somewhere. That call won't ever have a matching ret. But if you naively store that return address and predict it for the next ret, you'll get off by one and screw up every prediction after that.

Honestly this is the sort of thing I'd be surprised that software folk had any awareness of whatsoever, but anyone working at the "micro-architectural level" would surely have some hint of it.

We compiler folk know about it, but we are a willsome and belligerent people, best avoided.

Beef
Jul 26, 2004

JawnV6 posted:

What, pray tell, is work at the "micro-architecture level"? Like, do you know when the CRSB was augmented with an RRSB, or when the zero-length call optimization was put in? Or are you just referring to "I read an MSR and checked the value" as being some hot poo poo?

Calm down, I'm not saying that for dick-waving purposes.

I honestly have no idea what CRSB/RRSB is, no.

The issue came up during the recursion discussion; zero-length calls do not really apply. (Unless you are using a recursive function to do a while loop? I don't know, someone might call me on a technicality again.)

In the context of this reply to sarehu's post:

quote:

In the second, well, if you don't have tail calls you can do a regular recursive call on the function - the optimization loss is minimal in most cases on modern systems.
I think you will agree that the benefit from treating return addresses differently pales in comparison to having to populate stack frames, making a manual tail-call optimization using a goto totally legit.
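Toy example of that manual optimization: the tail call becomes parameter reassignment plus a goto, so no new stack frame gets populated.

code:
/* tail-recursive version */
int gcd(int a, int b) {
    if (b == 0) return a;
    return gcd(b, a % b);
}

/* hand-optimized: same algorithm, one stack frame, no calls */
int gcd_goto(int a, int b) {
top:
    if (b == 0) return a;
    int tmp = a % b;
    a = b;
    b = tmp;
    goto top;
}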

necrotic
Aug 2, 2005
I owe my brother big time for this!
So is PHP good for adding goto? Asking for a friend.


canis minor
May 4, 2011

You can write goto in anything, even in JS!

https://alexsexton.com/blog/2009/07/goto-dot-js/
