Programming Loops vs Recursion - Computerphile

  • [There's] a lot of interesting stuff both from the point of view of the content but also

  • the historical context behind, y'know, "When were `for' loops invented?" Well

  • that's what Algol called them, but prior to that FORTRAN called them DO loops.

  • And prior to that they existed in assembler. So, first of all, what's the history and

  • what does it get you when you can do loops, but when do you run out of steam,

  • even with loops, and you have to use this - shock! horror! - pure mathematicians' thing

  • that computer scientists have to learn about - recursion?! It was a real culture

  • shock, it really was, in roughly the 1940s and '50s, to suddenly find

  • out that what the theoreticians had been drivelling on about for years - about recursive

  • functions in mathematics - actually was of massive, massive importance for computer

  • science. Back in the '40s and early '50s it was all Assembler - or a

  • slightly dressed-up thing called a macro assembler, where you can have little

  • routines full of, y'know, packaged assembler instructions which could be

  • called up, as and when needed. So, that sort of served people for quite some

  • time. But probably one of the first high-level languages to introduce loops

  • was good old FORTRAN [shows textbook]. Even though that was published in '65, FORTRAN itself goes

  • back, I think, almost ten years before that. It was invented by John Backus and

  • a large team of people at IBM in the 1950s. Many of you will know it. It's an

  • excellent language for engineering and scientific calculations. It is low level.

  • I mean, when you look at the nature of a FORTRAN loop it's almost like doing it

  • in assembler - but not quite. They didn't call them for loops - they called them DO loops.

  • What I'm saying here is - you package all this up - where you're saying

  • repeat the following sequence of instructions, which I've done with my

  • wavy lines here. Keep doing them until you hit the statement with a numeric

  • label on it of 180. Then loop back from the statement labelled 180, back up to

  • here to increment the loop counter, which you're all familiar with in languages

  • like C. It wasn't done, as it would be in C, by saying: "Here's my block of

  • stuff to be repeated; it's inside these curly braces." Here you can see it's a lot

  • more like assembler, a lot more low-level. I mean there's nothing magic about "180"; it could

  • be "72"; it depended on your labelling system. Implicitly here, in a simple

  • thing like this, you'd start off [with the counter] at 1 and every time it returned back here it would

  • set [the counter] to 2, 3, 4 and so on, up to and including 10. It was comforting for

  • those who were coming from assembler into a higher-level language to see

  • something that was only slightly higher level, in sophistication, than assembler was.
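
As a sketch (mine, not from the video) of the loop shape being described: the label-delimited FORTRAN form appears in a comment, with a C-style equivalent below it. The loop body is invented purely for illustration.

```c
#include <stdio.h>

/* The FORTRAN form being described looks roughly like:
 *
 *       DO 180 I = 1, 10
 *          ...body (the "wavy lines")...
 *   180 CONTINUE
 *
 * The repeated block is delimited by the numeric label 180,
 * not by braces.
 */
int main(void)
{
    int a[10];

    /* C-style equivalent: the counter runs from 1 up to and
       including 10, and braces mark the repeated block. */
    for (int i = 1; i <= 10; i++) {
        a[i - 1] = i * i;   /* placeholder body, for illustration */
    }

    for (int i = 0; i < 10; i++)
        printf("a[%d] = %d\n", i, a[i]);
    return 0;
}
```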

  • How did loops become more "powerful", if you like?

  • Well, again, even in assembler and even in FORTRAN, there's no reason why you

  • couldn't have a loop within a loop. So I might have, outside of all this code, yet

  • another layer of DO. What shall we say: "DO 200 J = 1, 20". So, there might

  • be some more statements between 180 and 200, who knows, but again, you see, a

  • numeric label. And you can see what's happening is that for every setting of J,

  • which will start at 1 and go up to 20, for every single one of those J settings

  • the inner loop will be running through the complete spectrum of settings of I

  • going from 1 to 10. So you will have 200 locations [that] are being affected here.
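
A minimal C sketch of the nested loop being described, with an invented matrix-filling body: the outer counter J runs 1 to 20, the inner counter I runs 1 to 10, so the body touches 20 × 10 = 200 locations.

```c
#include <stdio.h>

int main(void)
{
    double m[20][10];   /* 200 locations in all */

    /* Outer loop: DO 200 J = 1, 20. Inner loop: DO 180 I = 1, 10.
       For every setting of J, the inner loop runs through the
       complete range of I. */
    for (int j = 1; j <= 20; j++) {
        for (int i = 1; i <= 10; i++) {
            m[j - 1][i - 1] = (double)(j * i);  /* illustrative value */
        }
    }

    printf("m[19][9] = %.1f\n", m[19][9]);      /* 20 * 10 = 200.0 */
    return 0;
}
```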

  • Basically going through the rows and columns of a matrix. All sorts of

  • calculations in physics, chemistry and particularly engineering just rely on

  • two-dimensional arrays full of numbers - either integers or scientific numbers

  • with a decimal point, and so on. Even hard-core

  • assembly programmers had to admit if you were doing heavy scientific programming it was

  • nice to be a little bit more abstract and to have this sort of facility

  • available to you. Now you might say: "Well, what came along to spoil the party, then?"

  • or "How did people realize that this was wonderful but not quite enough?"

  • The compiler of course has got to be tolerant and has got to be capable of

  • compiling nested DO loops correctly, but how deep would it let you nest them?

  • Well, I'm guessing, I would suspect that the early FORTRAN compilers probably

  • wouldn't allow you to go more than about 10 deep, maximum. And I think you and I

  • Sean, have just been looking up what the current limits are in C. I seem to remember

  • the earliest `gcc' was something like 32. But I think we looked this up ... some C++

  • nowadays allows you to do nested loops 256 deep! And, of course, there are

  • multi-dimensional problems that might actually need that, because it doesn't

  • take much knowledge of higher maths to realize if you've got a loop within a loop

  • the outer loop goes around n times; the inner loop is going around n times, you

  • are then coping with an n-squared problem. If you put the third loop inside

  • the other two you're coping with a cubic, three-dimensional, problem. So what we're

  • saying is all these multi-dimensional polynomial-going-on-exponential problems,

  • that come up quite naturally, you can cope with them in nested for-loops so

  • long as they don't need to be more than power-32 or power-256 or whatever it is.
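
To make that counting argument concrete (a sketch, not from the video): with three nested loops each running n times, the innermost statement executes n-cubed times.

```c
#include <stdio.h>

int main(void)
{
    const int n = 10;
    long count = 0;

    /* Three nested loops, each running n times: the innermost
       statement executes n * n * n times - an n-cubed problem. */
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                count++;

    printf("n = %d, innermost executions = %ld\n", n, count);  /* 1000 */
    return 0;
}
```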

  • And you think, well, that should be enough for anybody! There are these multi-dimensional

  • problems; you can just do them by nesting `for' loops, and surely [a depth of] 256 is

  • enough for anybody? What kind of problem wouldn't it be enough for? Well, a lot of

  • theoretical computer scientists of my acquaintance amused me greatly when - those of

  • them that will own up to this - back in the '60s, people started going to lectures

  • from mathematicians, theoreticians, people concerned

  • with "Godel Computability" and so on. And of course, those sort of people, were very

  • familiar indeed, at a mathematical level, with Ackermann's function. Now, as you know -

  • you and I - we've done that one: >> Sean: Was that "The most difficult ... ?" >> DFB: "The most difficult number to compute, question mark"

  • "We set this going four weeks ago nearly now the first few are vanished ..."

  • So what made it so difficult? Well, you write down Ackermann's function and

  • it very clearly ends up with routines calling themselves recursively in a very

  • very complicated way. Now I think your average sort of engineer would be happy

  • to say that there's this thing called `factorial' which is 5 times 4 times 3 times 2 times 1,

  • or whatever. And you could do that in a loop as well as doing this fancy recursion thing.
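
For contrast, a sketch of factorial both ways in C (the function names are mine): the loop version and the recursive version compute the same thing, which is why factorial alone never forced anyone to take recursion seriously.

```c
#include <stdio.h>

/* Factorial two ways: a plain loop, and "this fancy recursion thing".
   Both give the same answer; factorial is primitive recursive, so a
   loop is all you ever need for it. */
static unsigned long fact_loop(unsigned int n)
{
    unsigned long r = 1;
    for (unsigned int i = 2; i <= n; i++)
        r *= i;
    return r;
}

static unsigned long fact_rec(unsigned int n)
{
    return (n <= 1) ? 1UL : n * fact_rec(n - 1);
}

int main(void)
{
    printf("loop: %lu, recursive: %lu\n", fact_loop(5), fact_rec(5));  /* 120, 120 */
    return 0;
}
```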

  • But a lot of theoreticians admitted to me they saw Ackermann's function and said: "I could try that out in FORTRAN!" Now what they perhaps

  • didn't realize - but it became famous by 1960 - is:

  • FORTRAN is wonderful, but original FORTRAN did not do user-level recursion.

  • You could write a thing called ACK. You could actually get it to call itself

  • in FORTRAN. But you might have been expecting that every time it called

  • itself, it would lay out a data area for each recursive call - they're called "stack

  • frames" - we know that now. You get lots of stack frames, one on top of another and

  • as you come back through the recursion they're deleted and thrown away and you

  • climb back into your main program. FORTRAN doesn't do that. It sets

  • aside one stack frame. If you keep calling yourself recursively, it just tramples

  • in its muddy gumboots over all your data area and you end up with total

  • garbage. It no more gives you values of the Ackermann function than fly to the moon!
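
Here is Ackermann's function written with genuine user-level recursion, sketched in C rather than the Algol of the story: each call gets its own fresh stack frame, which is exactly what original FORTRAN's single, reused data area could not provide.

```c
#include <stdio.h>

/* Ackermann's function with genuine user-level recursion: each call
   gets a fresh stack frame instead of trampling one shared data area.
   Growth is ferocious; keep m small (m <= 3) or it will not return. */
static unsigned long ack(unsigned long m, unsigned long n)
{
    if (m == 0)
        return n + 1;
    if (n == 0)
        return ack(m - 1, 1);
    return ack(m - 1, ack(m, n - 1));
}

int main(void)
{
    printf("ack(2, 3) = %lu\n", ack(2, 3));   /* 9 */
    printf("ack(3, 3) = %lu\n", ack(3, 3));   /* 61 */
    return 0;
}
```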

  • And people said: "I then realized the importance of having user-level

  • recursion, in programming languages, to cope with those really hard problems

  • that fell outside nested for-loops". Algol was famous in that its routines

  • could call themselves recursively and could get the right answer and, for

  • limited low-order values of Ackermann's function - very slow, very slow indeed - but

  • it would come out with the right answer. >> Sean: Is there any need to think of an example of a

  • problem, or program, because Ackermann feels to me like it's the test-bed.

  • You know, when you're testing out a motor-car you might take it on the track

  • and see how fast it can go. But in day-to-day life that car might

  • only get half that speed. What's the real-world kind of equivalent? Is there

  • such a thing? >> DFB: Real world equivalent? >> Sean: ... of something that might need to use

  • recursion ... ? >> DFB: ... of that complexity? Not many things

  • is the answer to that. I mean, yes, it's true that Ackermann, as you know, was David

  • Hilbert's research student. And the challenge was on to find something that

  • was so innately recursive that - remember, it was "general recursive", they called it -

  • as opposed to "primitive recursive". And simple things like factorial, and indeed Fibonacci, are primitive recursive. So I think you're right that you really

  • are just making the point that eventually there are things that will

  • kill you. I think the question in the middle is: "Is there something out there -

  • pieces of program you need to write - where non-trivial recursion, in a sense,

  • is needed, but not quite to the horrendous degree that Ackermann did?" And the

  • answer is: "Yes, compilers is where it hit people". Because although early FORTRAN

  • did not provide user-level recursion, for you and me, nevertheless John Backus and

  • his team implemented it in the mid-1950s, I think, at IBM.

  • And Backus wrote articles afterwards basically saying: "We didn't know enough

  • about recursion and even though we didn't provide it for the users of our

  • language, boy did we need it in the compiler! And

  • we ended up inventing it in all but name." The syntactic structures of what is

  • legal, in a language, even at the level just of arithmetic statements, can be

  • quite recursive. Because you end up with brackets within brackets within brackets

  • all with a multiplier outside. And which order do you do the brackets in? And, you

  • know, how many levels of bracket nesting can you have? And if you don't

  • get things sorted out correctly then you'll get the wrong answer. But once again

  • the problem could be that your users would come up to you and present you

  • with a problem just designed to test out your compiler, and whether it was robust

  • enough to be able to cope with a high degree of nesting, even just in arithmetic statements.
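
As a toy illustration of that point (mine, not from the video), a recursive-descent evaluator for a made-up grammar of integers, `+`, `*` and parentheses: the routines call each other recursively, so arbitrarily deep bracket nesting is handled by the call stack rather than by a fixed number of hand-written loops.

```c
#include <stdio.h>

/* Toy recursive-descent evaluator for expressions like "2*(3+(4*5))".
   expr, term and factor call one another recursively; no error
   handling, purely to show the shape of the technique. */
static const char *p;            /* cursor into the input string */

static long expr(void);

static long factor(void)
{
    if (*p == '(') {             /* brackets within brackets: recurse */
        p++;                     /* skip '(' */
        long v = expr();
        p++;                     /* skip ')' */
        return v;
    }
    long v = 0;
    while (*p >= '0' && *p <= '9')
        v = v * 10 + (*p++ - '0');
    return v;
}

static long term(void)
{
    long v = factor();
    while (*p == '*') { p++; v *= factor(); }
    return v;
}

static long expr(void)
{
    long v = term();
    while (*p == '+') { p++; v += term(); }
    return v;
}

int main(void)
{
    p = "2*(3+(4*5))";
    printf("2*(3+(4*5)) = %ld\n", expr());   /* 46 */
    return 0;
}
```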

  • So by 1960 in Algol, yeah, there were enough users, at the user level, who could see that a modicum of recursion, perhaps more

  • complicated than factorial but not quite up to full Ackermann capabilities would be

  • very nice indeed to have within your language. Again referring back to that

  • original video, I had a lot of really interesting mail from various people who

  • said to me: "OK, you said that this is an innately recursive problem and it just

  • had to have general recursion capabilities? Well I .... "
