I expect that most of my fellow classmates here at MIT have, like me, been called upon by relatives - particularly parents and grandparents - to explain, fix or set up computers. We get frustrated phone calls about error messages, and during our trips home we often teach new computer skills and fix problems.
In most cases, we "Digital Natives" were not taught how to use computers. Some of us may have had Apple II computers in elementary school (used for playing Oregon Trail or other games), and some (like me) had typing lessons - but we are definitely self-taught. Is it that learning how to use computers is like learning a language - it takes effort and structure as an adult, and happens much more organically as a child?
Perhaps - this article by Sugata Mitra is incredible. So often adults assume that complex skills, like using a computer, require regimented, segmented, carefully planned exposition in order to be absorbed by young minds. (Just look at how we teach math in this country! Or even reading for that matter! If you want proof that people can - and always do - learn to read with no instruction at all, look here.) Mitra's article shows just the opposite. Children (poor slum children, lest one suggest that only the privileged can master these things) were given access to a computer, which appeared with no fanfare, no instruction, and no support for the children's native language, and they figured out how to use it.
Three quotes from Mitra's article relating to Prensky's Native/Immigrant Digital Divide (apologies for the length, but it's interesting, you see):
"It was a social observation rather than a scientific one. Any parent who had given his child a computer would invariably remark to me about it. I could hardly ever find an exception. Within a very short period of time, the parent would be claiming that the child was a genius with a computer. When I poked a little further, I invariably found that the child was doing things with the computer that the parent didn't understand."
"Well, I tried another experiment. I went to a middle-class school and chose some ninth graders, two girls and two boys. I called their physics teacher in and asked him, "What are you going to teach these children next year at this time?" He mentioned viscosity. I asked him to write down five possible exam questions on the subject. I then took the four children and said, "Look here guys. I have a little problem for you." They read the questions and said they didn't understand them, it was Greek to them. So I said, "Here's a terminal. I'll give you two hours to find the answers."
Then I did my usual thing: I closed the door and went off somewhere else.
They answered all five questions in two hours. The physics teacher checked the answers, and they were correct. That, of itself, doesn't mean much. But I said to him, "Talk to the children and find out if they really learned something about this subject." So he spent half an hour talking to them. He came out and said, "They don't know everything about this subject or everything I would teach them. But they do know one hell of a lot about it. And they know a couple of things about it I didn't know.""
"I'm not even going to suggest that we use this [technique] for adults. The only reaction we got from adults was, "What on earth is this for? Why is there no one here to teach us something? How are we ever going to use this?" I contend that by the time we are 16, we are taught to want teachers, taught that we cannot learn anything without teachers."

So. Here's the real question - is there a *fundamental* difference between how adults and children learn to use computers? Or, as Mitra suggests, are adults taught that they need teachers?
I want to mention some of the characteristics Prensky identifies about Digital Natives, because although I think he brings up some really good points, I disagree with him on several counts.
Here's a quote from his essay describing what Digital Natives are like:
"Digital Natives are used to receiving information really fast. They like to parallel process and multi-task. They prefer their graphics before their text rather than the opposite. They prefer random access (like hypertext). They function best when networked. They thrive on instant gratification and frequent rewards. They prefer games to “serious” work."
(Importantly to what I am going to say, he also mentions: 1) that Digital Immigrants have a tendency to assume that Learning Is Serious Business and shouldn't be confused with playtime and 2) that Digital Natives, widely assumed to have lost the ability to memorize anything because of the internet, in fact memorize lots of stuff - just not academics.)
As a Digital Native, here's what I think: Yes, I'm used to receiving information fast. I get irritated by slow internet connections. But take the internet away altogether - say, when hiking - and I don't go crazy. I don't think young people are dependent on the internet in order to think - which is an attitude I often hear, as if our brains have been outsourced. Why WOULDN'T we use it? It's a miracle tool! I think that teenagers often say things like "I can't live without my cell phone", when what they really mean is "I can't live without my friends", and that's certainly not unusual for a teenager of any era!
Do I like to parallel process and multi-task? Eh, sometimes. I don't really see what relevance this has to the debate. My mom multi-tasks just as much as I do, and she's squarely in the Digital Immigrant category. Do I prefer graphics before text? Not necessarily. While I do like pictures, I actually find web pages with distracting images irritating. When I'm looking for information, I don't want extraneous pictures.
(In general I think Prensky's points about multi-tasking and graphics fall somewhat prey to the stereotype of sugar-high 10-year-olds who can't be sedated except by jittery videogames and more sugar. I'm not sure these children actually exist. If there is actually a 10-year-old who can't sit still for something he or she is interested in, I have yet to meet him or her.)
Do I prefer random access (like hypertext)? Yes! Absolutely. I think this is one of the most important points Prensky makes. If you have hypertext, you're not limited to learning things in a linear fashion. You can build a network of knowledge, at your own pace. If you learn differently from other people (which, by the way, is true of, well, everybody), you have the freedom to take your own winding path through information.
Now, do I thrive on instant gratification? No. Unless you count having a web page load as "gratification" - which I don't. This idea implies that Digital Natives have no goals that they are willing to pursue for any length of time, and that they crave less meaningful, more instant rewards. I think what IS true is that many young people are bored by textbook classroom teaching, which presents information with no excitement or joy. If pleasure in learning is what Prensky means by "gratification", then I think he's right. And then, I would ask, why should anybody put up with learning that is not fun, when it has the potential to be fun?
Prensky says that Digital Natives thrive on games as opposed to serious work. Well, good for them. Good for us. If learning and living can be more fun than they currently are in schools - and I can certainly attest to the fact that secondary education is NOT FUN - why on earth would we choose not to have fun? Some sort of puritanical guilt? Prensky and Mitra and Dodd (the author of the page on reading) all give examples of people learning without forced instruction, and having a good time doing it. Prensky points out that young people are perfectly capable of memorizing worlds of information about Pokemon, but seem incapable of memorizing the capitals of the world. The internet hasn't outsourced our brains or killed our curiosity - in fact, my personal experience would lead me to believe that it's given me more food for thought than any other resource I have. Thank goodness that young people today are realizing that we needn't divide our lives into Serious and Fun.