1. “The Computer Delusion,” by Todd Oppenheimer: remarks below will cite this article
2. This seems to describe a considerable number of blog posts, as well as student essays:
In Endangered Minds, Jane Healy wrote of an English teacher who could readily tell which of her students’ essays were conceived on a computer. “They don’t link ideas,” the teacher says. “They just write one thing, and then they write another one, and they don’t seem to see or develop the relationships between them.” The problem, Healy argued, is that the pizzazz of computerized schoolwork may hide these analytical gaps, which “won’t become apparent until [the student] can’t organize herself around a homework assignment or a job that requires initiative. More commonplace activities, such as figuring out how to nail two boards together, organizing a game … may actually form a better basis for real-world intelligence.”
This passage is a lovely example of what the article gets right and also gets very, very wrong. A major problem with the Internet is that our ability to argue, which involves paying close attention to varied lines of logic and teasing out the full implications of each, seems to be largely irrelevant there. Arguments from authority, ad hominem attacks and unchecked sources make up the “height” of argumentation. On any number of blogs one can see commenters who read only the title of the relevant post and then leave a generic or tangential comment on the subject. Actually engaging the writer as someone with an opinion worth considering seems to be far beyond most people’s capacities.
What is frightening is how most people I encounter nowadays can’t tell a bad argument from a good one – most people I meet can look at the entries I decry and say “so what, looks good to me.” This is anecdotal, obviously, but I can’t help but think the Internet is contributing directly to this. Even to use the term “blogging about one’s day” for Myspace blog entries written entirely in chat-acceptable spelling, complete with comments that are at best inside jokes between friends, is to risk over-romanticizing what could be happening here.
We could be creating, out of a literate society, a subliterate culture that all of us must strive to placate.
Now when all is said and done, I’m very hopeful about the Internet, and happy about blogs. I only wish people would blog more about things I felt comfortable responding to, in a way that explored the strengths and weaknesses of their own logic.
3. But I did say that passage was wrong about something major, and I’d better expand on that.
The really interesting question is: “Why Don’t Computers Educate Better?” The article cites a number of reasons, but there’s a giant philosophical obstacle that needs to be hurdled.
Namely, all learning nowadays is based on method. We give students methods for solving math problems, evaluating studies, and judging fact from opinion – heck, I’m guilty of this: I give a method for reading poetry and philosophy.
Method is the core of Enlightenment. The assumption is that if you can execute the method, you have actual understanding of the issues involved. I don’t think I need to get into exactly how laughable this idea is; it suffices to say that there is a corollary to this notion: even if you don’t understand, it doesn’t matter, for you’re implicitly working for the progress of humanity.
It should be noted that the way I prefer to characterize learning is truly “discovery.” Means are arranged as subordinate to ends, and while ends are debatable, they are to be appreciated, not dismissed outright. Enlightenment trades in “ends” for security – a real debate about anything would bring forth some very dangerous ideas, always – and puts forth a “means” we can apparently all agree on.
In any case, computers execute a method, we learn via method. So why aren’t we getting more educated via computing?
The answer is that we are getting more educated in a useful way – it is just that enough of the old standards survive that, while we can’t defend them explicitly, they dampen the enthusiasm for the modernizing project.
The article commits ferocious blunders by constantly citing tactile, sensation-oriented learning as the alternative to computing, and by assuming that it is unacceptable for businesses to teach employees specifically what they want from them.
In the short term, the article’s argument works: for this generation, the failure of technology in key areas means they should have general skills. Furthermore, since employers demand specialized skill sets that are not identical to each other, general skills are preferable.
But guess what? “General skills” is precisely the issue generating these problems. As the technology gets better, certain skills get lost entirely, and it isn’t clear what value those skills will have unless the Apocalypse forces us to remake the world (I have long division clearly in mind here). The deeper problem is that needing “general skills” to please a multitude of employers still keeps education as nothing more than a shill for business.
A stronger argument is needed for education in and of itself in order to see why computing may be faulty. The practical concerns can be addressed quite easily: I can create a curriculum where a kid learns how to work in retail at a number of positions, and also learns how to file papers, do taxes, keep numbers straight, etc. But just because a robot or computer can’t do those jobs now doesn’t mean it never will. Once a sense of what “good” we want is established, we can then move on to what might be the deeper concern of the article, which is our sense of craft apart from the tools we use.