I've been to Powell's City of Books again lately, with visiting fam 'n friends, and was again humbled by the vastness of our Humans in Universe corpus. So many rooms!
I went crazy with the camera, as is my wont, taking stock of which Python titles were available, for example. I mean the computer language, although I'm sure the store's shelf space devoted to snakes, and pythons specifically, is likewise several feet long.
I was glad to see Fluent Python among the titles, one of my favorites. I pay a monthly subscriber fee to read the O'Reilly books online, among others. I used to work for O'Reilly.
Speaking of computer science (O'Reilly the publisher, not the autoparts franchise), I finally got around to watching the above talk by Hinton at the Royal Institution (RI). I enjoyed it a lot, not least because he gives a nod to my guy Wittgenstein (meaning I studied his stuff a lot at Princeton).
I also read Gilbert Ryle, another philosopher who called the "inner theater" idea into question, on semi grammatical semi ontological grounds (the "linguistic turn" neighborhood, which we could say Nietzsche helped open up, or at least I do in my slides, moving forward through existentialist Kaufmann to pragmatist Rorty -- two of my philo professors at Princeton).
Hinton is pessimistic, fearing humans will lose the bandwidth wars because brains are analog, not digital, and only learn from one another slowly (relatively speaking). If it's really down to us versus them, he sees how it might easily be them that wins.
Based on Hinton's talk, my question is: why not flood the chatbots with a lot of healthy, humane, "taking care of humanity at the global level as a goal" type of talk, as raw training data? Shouldn't we be doing that anyway, to train ourselves? Why not skew the LLMs in our favor while we still have the chance?
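To make that concrete in Python terms (a toy sketch only, with made-up document lists and a made-up humane_weight knob, nothing like anyone's real training pipeline), the idea is simply to overweight curated, prosocial material when assembling the training mix:

```python
import random

# Toy sketch: oversample a curated "humane" corpus when building a training mix,
# so prosocial text shows up more often in the sampled stream.
# The document lists and the humane_weight knob are invented for illustration.

general_corpus = ["doc about sports", "doc about markets", "doc about weather"]
humane_corpus = ["doc on ending hunger", "doc on global-scale cooperation"]

def build_training_mix(general, humane, humane_weight=3, size=10, seed=42):
    """Sample documents, with the humane corpus oversampled by simple repetition."""
    pool = general + humane * humane_weight
    rng = random.Random(seed)
    return [rng.choice(pool) for _ in range(size)]

if __name__ == "__main__":
    for doc in build_training_mix(general_corpus, humane_corpus):
        print(doc)
```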
As Thomas Paine pointed out (didn't he): prophecies have this uncanny way of being self-fulfilling. If all your LLMs know how to do is crank out dire predictions and strive for their realization (motive: to be right and say I told you so), wouldn't that be an indication of an "AI bias" we should address? We have remedies.
Also, I'd say we're still making strides in upping the mind-brain bandwidth when it comes to serving the polymath autodidact within each one of us (St. Augustine allusion). What with goggles and yes, what with chatbots (gossip reflectors), we're positioned to really accelerate our self-reprogramming whenever we feel the need.
Brainwashing by others is totalitarian. Elective self-brainwashing, voluntarily going in for some new patterns of thinking, is more what psychotherapy is supposed to be about, and what self-education is, more generally.
Self-education is therapeutic, curative in a good way, at least potentially; there's that intent. Anyway, why close that door, rhetorically speaking? I'm for keeping a foot in it, at least.
Enhanced voluntary self-re-education is what I take the Hunger Project to have been about (I'm talking about an obscure project undertaken in the early 1980s, which I was tracking at the time from my perch in Jersey City, with Bucky Fuller on the advisory board).
Yes, Bernays-style propaganda, the hidden persuaders Vance Packard wrote about, may be used to train up a culture of bland conformity and consumerism.
But why blame the tools?
Use the same persuasive abilities, unhidden, out in the open, to inspire ourselves to end world hunger, to end starvation as a still-significant cause of death, a way-too-big wedge of the pie chart (among unnatural causes of death). That seemed a doable project then, and it still does to this day.
Lastly, I'd say that because Hinton has that healthy skepticism that comes with the atheistic lineage, he's more closed-minded than necessary regarding what religious folks call the Zeitgeist, a German word with Geist (spirit, ghost) in it.
When we talk about ants or bees having a "hive mind," we're suggesting a "more than the sum of its parts" relationship, an emergent intelligence we might call "higher" (as a matter of taxonomy, but maybe "lower" if we want to think more in terms of roots).
Humans as isolated brainiacs, with only low-bandwidth university courses to update themselves with, are maybe not as slow as molasses to adapt as Hinton's model predicts. All that "doom scrolling" that goes on these days, in between more structured communications, probably counts for something. It's more than just "junk DNA".
Also, I'd say the chatbots are currently helping to spread the necessary logistical knowledge precisely because they let people start from where they are, formulating their own queries, whereas professors, of necessity, can't custom-tailor their responses to that extent.