Monday, May 18, 2015

Bertrand Russell... Article by Maria Popova

I think it's time to continue the journey into webxistence...

Bertrand Russell On Immortality, Why Religion Exists, And What “The Good Life” Really Means

by Maria Popova

“There are forces making for happiness, and forces making for misery. We do not know which will prevail, but to act wisely we must be aware of both.”
Bertrand Russell (May 18, 1872–February 2, 1970) is one of humanity’s most grounding yet elevating thinkers, his writing at once lucid and luminous. There is something almost prophetic in the way he bridges timelessness and timeliness in contemplating, a century earlier, ideas urgently relevant to modern life — from how boredom makes happiness possible to why science is the key to democracy. But nowhere does his genius shine more brilliantly than in What I Believe (public library).
Published in 1925, the book is a kind of catalog of hopes — a counterpoint to Russell’s Icarus, a catalog of fears released the previous year — exploring our place in the universe and our “possibilities in the way of achieving the good life.”
Russell writes in the preface:
In human affairs, we can see that there are forces making for happiness, and forces making for misery. We do not know which will prevail, but to act wisely we must be aware of both.
One of Russell’s most central points deals with our civilizational allergy to uncertainty, which we try to alleviate in ways that don’t serve the human spirit. Nearly a century before astrophysicist Marcelo Gleiser’s magnificent manifesto for mystery in the age of knowledge — and many decades before “wireless” came to mean what it means today, making the metaphor all the more prescient and apt — Russell writes:
It is difficult to imagine anything less interesting or more different from the passionate delights of incomplete discovery. It is like climbing a high mountain and finding nothing at the top except a restaurant where they sell ginger beer, surrounded by fog but equipped with wireless.
Long before modern neuroscience even existed, let alone knew what it now knows about why we have the thoughts we do — the subject of an excellent recent episode of NPR’s Invisibilia — Russell points to the physical origins of what we often perceive as metaphysical reality:
What we call our “thoughts” seem to depend upon the organization of tracks in the brain in the same sort of way in which journeys depend upon roads and railways. The energy used in thinking seems to have a chemical origin; for instance, a deficiency of iodine will turn a clever man into an idiot. Mental phenomena seem to be bound up with material structure.
Illustration from 'Neurocomic,' a graphic novel about how the brain works.
Nowhere, Russell argues, do our thought-fictions stand in starker contrast with physical reality than in religious mythology — and particularly in our longing for immortality which, despite a universe whose very nature contradicts the possibility, all major religions address with some version of a promise for eternal life. With his characteristic combination of cool lucidity and warm compassion for the human experience, Russell writes:
God and immortality … find no support in science… No doubt people will continue to entertain these beliefs, because they are pleasant, just as it is pleasant to think ourselves virtuous and our enemies wicked. But for my part I cannot see any ground for either.
And yet, noting that the existence or nonexistence of a god cannot be proven for it lies “outside the region of even probable knowledge,” he considers the special case of personal immortality, which “stands on a somewhat different footing” and in which “evidence either way is possible”:
Persons are part of the everyday world with which science is concerned, and the conditions which determine their existence are discoverable. A drop of water is not immortal; it can be resolved into oxygen and hydrogen. If, therefore, a drop of water were to maintain that it had a quality of aqueousness which would survive its dissolution we should be inclined to be skeptical. In like manner we know that the brain is not immortal, and that the organized energy of a living body becomes, as it were, demobilized at death, and therefore not available for collective action. All the evidence goes to show that what we regard as our mental life is bound up with brain structure and organized bodily energy. Therefore it is rational to suppose that mental life ceases when bodily life ceases. The argument is only one of probability, but it is as strong as those upon which most scientific conclusions are based.
A 1573 painting by Portuguese artist, historian, and philosopher Francisco de Holanda, a student of Michelangelo's, from Michael Benson's book 'Cosmigraphics'—a visual history of understanding the universe.
But evidence, Russell points out, has little bearing on what we actually believe. (In the decades since, pioneering psychologist and Nobel laureate Daniel Kahneman has demonstrated that the confidence we have in our beliefs is no measure of their accuracy.) Noting that we simply desire to believe in immortality, Russell writes:
Believers in immortality will object to physiological arguments [against personal immortality] on the ground that soul and body are totally disparate, and that the soul is something quite other than its empirical manifestations through our bodily organs. I believe this to be a metaphysical superstition. Mind and matter alike are for certain purposes convenient terms, but are not ultimate realities. Electrons and protons, like the soul, are logical fictions; each is really a history, a series of events, not a single persistent entity. In the case of the soul, this is obvious from the facts of growth. Whoever considers conception, gestation, and infancy cannot seriously believe that the soul is an indivisible something, perfect and complete throughout this process. It is evident that it grows like the body, and that it derives both from the spermatozoon and from the ovum, so that it cannot be indivisible.
Long before the term “reductionism” would come to dismiss material answers to spiritual questions, Russell offers an elegant disclaimer:
This is not materialism: it is merely the recognition that everything interesting is a matter of organization, not of primal substance.
Art by Roz Chast from her illustrated meditation on aging, illness, and death.
Our obsession with immortality, Russell contends, is rooted in our fear of death — a fear that, as Alan Watts has eloquently argued, is rather misplaced if we are to truly accept our participation in the cosmos. Russell writes:
Fear is the basis of religious dogma, as of so much else in human life. Fear of human beings, individually or collectively, dominates much of our social life, but it is fear of nature that gives rise to religion. The antithesis of mind and matter is … more or less illusory; but there is another antithesis which is more important — that, namely, between things that can be affected by our desires and things that cannot be so affected. The line between the two is neither sharp nor immutable — as science advances, more and more things are brought under human control. Nevertheless there remain things definitely on the other side. Among these are all the large facts of our world, the sort of facts that are dealt with by astronomy. It is only facts on or near the surface of the earth that we can, to some extent, mould to suit our desires. And even on the surface of the earth our powers are very limited. Above all, we cannot prevent death, although we can often delay it.
Religion is an attempt to overcome this antithesis. If the world is controlled by God, and God can be moved by prayer, we acquire a share in omnipotence… Belief in God … serves to humanize the world of nature, and to make men feel that physical forces are really their allies. In like manner immortality removes the terror from death. People who believe that when they die they will inherit eternal bliss may be expected to view death without horror, though, fortunately for medical men, this does not invariably happen. It does, however, soothe men’s fears somewhat even when it cannot allay them wholly.
In a sentiment of chilling prescience in the context of recent religiously motivated atrocities, Russell adds:
Religion, since it has its source in terror, has dignified certain kinds of fear, and made people think them not disgraceful. In this it has done mankind a great disservice: all fear is bad.
Science, Russell suggests, offers the antidote to such terror — even if its findings are at first frightening as they challenge our existing beliefs, the way Galileo did. He captures this necessary discomfort beautifully:
Even if the open windows of science at first make us shiver after the cosy indoor warmth of traditional humanizing myths, in the end the fresh air brings vigor, and the great spaces have a splendor of their own.
Art from 'You Are Stardust,' a children's book teaching kids about the universe.
But Russell’s most enduring point has to do with our beliefs about the nature of the universe in relation to us. More than eight decades before legendary graphic designer Milton Glaser’s exquisite proclamation — “If you perceive the universe as being a universe of abundance, then it will be. If you think of the universe as one of scarcity, then it will be.” — Russell writes:
Optimism and pessimism, as cosmic philosophies, show the same naïve humanism; the great world, so far as we know it from the philosophy of nature, is neither good nor bad, and is not concerned to make us happy or unhappy. All such philosophies spring from self-importance, and are best corrected by a little astronomy.
He admonishes against confusing “the philosophy of nature,” in which such neutrality is necessary, with “the philosophy of value,” which beckons us to create meaning by conferring human values upon the world:
Nature is only a part of what we can imagine; everything, real or imagined, can be appraised by us, and there is no outside standard to show that our valuation is wrong. We are ourselves the ultimate and irrefutable arbiters of value, and in the world of value Nature is only a part. Thus in this world we are greater than Nature. In the world of values, Nature in itself is neutral, neither good nor bad, deserving of neither admiration nor censure. It is we who create value and our desires which confer value… It is for us to determine the good life, not for Nature — not even for Nature personified as God.
Russell’s definition of that “good life” remains the simplest and most heartening one I’ve ever encountered:
The good life is one inspired by love and guided by knowledge.
Knowledge and love are both indefinitely extensible; therefore, however good a life may be, a better life can be imagined. Neither love without knowledge, nor knowledge without love can produce a good life.
What I Believe is a remarkably prescient and rewarding read in its entirety — Russell goes on to explore the nature of the good life, what salvation means in a secular sense for the individual and for society, the relationship between science and happiness, and more. Complement it with Russell on human nature, the necessary capacity for “fruitful monotony,” and his ten commandments of teaching and learning, then revisit Alan Lightman on why we long for immortality.

Friday, June 10, 2011

Max Mathews: The First Computer Musician -

The digital age was born in the 1950s.

Max Mathews: The First Computer Musician -

June 8, 2011, 9:00 PM

The First Computer Musician

The Score
In The Score, American composers on creating “classical” music in the 21st century.
Max Mathews in Berkeley, Calif., in 2005. Photo: Roger Linn.
If the difference between 1911 and 2011 is electricity and computation, then Max Mathews is one of the five most important musicians of the 20th Century. – Miller Puckette
In 1957 a 30-year-old engineer named Max Mathews got an I.B.M. 704 mainframe computer at the Bell Telephone Laboratories in Murray Hill, N. J., to generate 17 seconds of music, then recorded the result for posterity. While not the first person to make sound with a computer, Max was the first one to do so with a replicable combination of hardware and software that allowed the user to specify what tones he wanted to hear. This piece of music, called “The Silver Scale” and composed by a colleague at Bell Labs named Newman Guttman, was never intended to be a masterpiece. It was a proof-of-concept, and it laid the groundwork for a revolutionary advancement in music, the reverberations of which are felt everywhere today.
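To make the idea concrete for modern readers, here is a minimal sketch in Python of what "specifying the tones you want to hear" in software can look like: a toy score of frequencies and durations, rendered sample by sample into a WAV file. It is only an illustration of the principle, not Mathews' original MUSIC program or anything resembling the Bell Labs setup.

```python
# A toy illustration of software-specified tones (not Mathews' MUSIC program):
# render a tiny "score" of (frequency, duration) pairs to a mono 16-bit WAV.
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second

def render_tone(frequency_hz, duration_s, amplitude=0.5):
    """Generate raw 16-bit samples for a sine tone at the given pitch."""
    n_samples = int(SAMPLE_RATE * duration_s)
    frames = []
    for i in range(n_samples):
        value = amplitude * math.sin(2 * math.pi * frequency_hz * i / SAMPLE_RATE)
        frames.append(struct.pack('<h', int(value * 32767)))
    return b''.join(frames)

# The "score": pitch in Hz and duration in seconds, chosen arbitrarily.
score = [(440.0, 0.5), (494.0, 0.5), (523.3, 1.0)]

with wave.open('tones.wav', 'wb') as out:
    out.setnchannels(1)            # mono
    out.setsampwidth(2)            # 16-bit samples
    out.setframerate(SAMPLE_RATE)
    for freq, dur in score:
        out.writeframes(render_tone(freq, dur))
```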
When Max died in April at the age of 84 he left a world where the idea that computers make sound is noncontroversial; even banal. In 2011, musicians make their recordings using digital audio workstations, and perform with synthesizers, drum machines and laptop computers. As listeners, we tune in to digital broadcasts from satellite radio or the Internet, and as consumers, we download small digital files of music and experience them on portable music players that are, in essence, small computers. Sound recording, developed as a practical invention by Edison in the 1870s, was a technological revolution that forever transformed our relationship to music.
A pioneer who believed that computers were meant to empower humans to make music, not the other way around.
In comparison, the contributions of Max Mathews may seem inevitable. Just as so much of our life has become “digitized,” so it seems that sooner or later, sound would become the domain of computers. But the way in which Max opened up this world of possibilities makes him a singular genius, without whom I, and many people over the last six decades, would have led very different lives.

Thursday, June 9, 2011

The UN urges that the Internet be a universal right, easily accessible to any individual. | En positivo

This is an enormously important development for humanity. The elevation of the Internet to the status of a human right is an unmistakable sign that we are already in the 21st century.

The UN urges that the Internet be a universal right, easily accessible to any individual. | En positivo

The Internet is a “human right,” according to the UN


The UN declares Internet access a human right.
Internet use is becoming an indispensable tool for freedom of expression. More than a mere means of communication, it is becoming a necessity in the era of globalization we live in today. For this reason, the United Nations General Assembly has declared Internet access a human right.

From the uprisings in the Middle East to the '15-M' movement in Madrid, such events might not have been possible without the invaluable help of the Internet. The power to spread information that this tool provides is becoming a basic necessity for a large part of the population.

Among the arguments the UN puts forward, it maintains that the Internet is a tool that favors the growth and progress of society as a whole. It also considers that the Internet should be a universal right, easily accessible to any individual, and urges governments to facilitate access to it.

“The unique and changing nature of the Internet not only allows individuals to exercise their right to opinion and expression, but also forms part of their human rights and promotes the progress of society as a whole,” said UN Special Rapporteur Frank La Rue in a press release picked up by CNN.

According to La Rue, governments “must make an effort” to make the Internet “widely available, accessible and affordable for all.” Ensuring universal access to the Internet “must be a priority for all states.”

The organization has also expressed its dismay at the repressive measures of some governments that violate Internet access. From Western governments such as France, with its Hadopi law, to countries with dictatorships as their model of power, restrictive measures on Internet access are being applied today.

The Chinese government has blocked access to sites such as Facebook, Twitter, YouTube and LinkedIn, and has even created its own search engine that filters and censors searches for terms such as “jasmine revolution” and “democracy,” among many others.

Many governments have blocked access to the Internet. Egypt did so during the social uprisings that ended the dictatorship of Hosni Mubarak. Iran blocked some pages of activists who were calling for a demonstration, and many other countries have followed this example.

The UN affirms that access to the web must be maintained and is especially valuable “at key political moments such as elections, times of social unrest, or historical and political anniversaries,” as reported by CNN.

In fact, this tool is considered so important that the United States has developed technologies that would allow it to restore the Internet connection in a country that had blocked it, should it choose to do so.

Finally, the UN points out that the Internet, as a means of exercising the right to freedom of expression, can only serve these purposes if states commit to developing effective policies to achieve universal access.

Source: Press

Sunday, February 27, 2011

Burning Chrome

The importance of this article lies in its explanation of how Google Chrome has, in a short time, become the equivalent of an operating system that lets the user do everything without ever leaving it.

If this isn't Webxistence, what is?

Burning Chrome

Burning Chrome

Jon Evans
“A good player goes where the puck is. A great player goes where the puck is going to be”—The Great One
Google made a few interesting announcements this week. First, Google Docs Viewer support for a sheaf of new document types, including Excel, Powerpoint, Photoshop and PostScript. Second, Chrome’s new ability to run background apps that run seamlessly and invisibly behind the browser. Third, they released Google Cloud Connect, which lets Windows users sync Office documents to Google Docs. They also announced the Android 3.0 SDK – but despite the ongoing tablet hysteria, in the long run, the first three are more important.
Little by little, iteration by iteration, the Chrome browser is quietly morphing into a full-fledged multitasking operating system in its own right. Oh, sure, technically it’s actually running on another OS, but you increasingly never need to launch anything else. View and edit documents in Google Docs, watch and listen to HTML5 video and audio, communicate via Gmail and its Google Voice plugin, use Google Docs as a file system – and the line between “Chrome OS” and “Chrome on any other OS” suddenly grows very fine.
Google’s long-term strategy seems to be to supplant Microsoft by first building the best browser, then making it easy to move your files to Google Docs … and finally, slowly but inexorably, making Windows and Office irrelevant. Obviously no one will abandon Microsoft products wholesale anytime soon; but as cloud computing grows more ubiquitous, Google steadily iterates feature after feature, and people grow accustomed to working in the browser, then one day, maybe only a couple of years from now, a whole lot of people – and businesses – will begin to think to themselves “Hey, we haven’t actually needed Windows or Office in months. Why do we even have them at all?”

Wednesday, February 23, 2011

Race to the Top of What? Obama On Education -

Interesting that I saved this when it was published on January 31 and am only now posting it to my blog. What's interesting is that I just heard the Governor of Michigan order the closure of 50% of Detroit's schools by 2013.

So it is worth asking: what do we really want 100,000 new teachers for, as Obama says? To bury them in a landfill?

Race to the Top of What? Obama On Education -
January 31, 2011, 8:00 PM

Race to the Top of What? Obama On Education

Stanley Fish on education, law and society.
On the morning after the State of the Union speech was delivered, John Hockenberry, co-host of the NPR program “The Takeaway,” read aloud President Obama’s declaration that “we want to prepare 100,000 new teachers in the fields of science, technology, engineering and math.” Hockenberry commented: “You scientists, engineers and techies know who you are; but what about the rest of us?”
What about the rest of us, indeed! Obama had just got through saying, “We want to reward good teachers,” and he went on to make a pitch for new recruits to the teaching profession: “If you want to make a difference in the life of a child — become a teacher.” Not, however, a teacher of English or French or art history. Obama doesn’t say so, but by the logic of his presentation, these disciplines are not what he has in mind when he talks about the “Race to the Top” and calls it “the most meaningful reform of our public schools in a generation.”
Race to the top of what? We get a hint from this statement: “We need to teach our kids that it’s not just the winner of the Super Bowl who deserves to be celebrated, but the winner of the science fair.”
Now it’s clear what is going on here. Obama is developing his major theme: we need innovation to catch up with China and other advanced societies. And it is perfectly reasonable to tie innovation in certain fields to the production of citizens who are technically, mathematically and scientifically skilled. But is that what’s wrong with American education, too few students who acquire the market-oriented skills we need to compete (a favorite Obama word) in the global economy and too few teachers capable of imparting them? Is winning the science fair the goal that defines education? A dozen more M.I.T.s and Caltechs and fewer great-book colleges and we’d be all right?

The Untold Story of How My Dad Helped Invent the First Mac | Co.Design

The real story is yet to be written. And it will be fascinating.
What is interesting and revealing is coming to grips with a very hard reality... more than 50% of what happens in technology, nobody really knows what it is or how it happens. And worse still, of the less than 50% that is known, much is really believed or accepted through something resembling faith. And of the remainder that is explainable, only a handful of people understand it.
In reality, the Internet is alchemy... something we do ends up being something else we can use, even though we haven't the faintest idea how it came to be what it is.
Just one example: how many people out there really understand the HTML code behind this page?
I assure you of one thing... I am not one of them.

The Untold Story of How My Dad Helped Invent the First Mac | Co.Design

The Untold Story of How My Dad Helped Invent the First Mac

The guiding principles laid out in those early days still form Apple's DNA to this day.
Jef Raskin, my father, helped develop the Macintosh. I was recently looking at some of his old documents and came across his February 16, 1981 memo detailing the genesis of the Macintosh.
It was written in reaction to Steve Jobs taking over managing hardware development. Reading through it, I was struck by a number of the core principles Apple now holds that were set in play three years before the Macintosh was released. Much of this is particularly important in understanding Apple's culture and why we have the walled-garden experience of the iPhone, iPad, and the App Store.

Even better, I found some sometimes snarky comments Jef had made to the memo as part of the Stanford Computer History project. The annotated memo follows my commentary.

Apple Learns to Own the Entire Experience

Reading the memo, we see that Apple was struggling with an explosion of fragmentation with the Apple II:
It is impossible to write a program on the Apple II or III that will draw a high-resolution circle since the aspect ratio and linearity of the customer's TV or monitor is unknown.
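To make the memo's point concrete, here is a small sketch in Python of why circle-drawing code needs to know the display's pixel aspect ratio; the 0.8 value is purely illustrative, not a documented Apple II figure.

```python
# A sketch of the problem the memo describes: on a display whose pixel aspect
# ratio (pixel width / pixel height) is unknown, code that uses the same
# radius in x and y draws an ellipse, not a circle. The 0.8 value below is
# purely illustrative, not a documented Apple II figure.
import math

def circle_points(cx, cy, radius, pixel_aspect_ratio, steps=64):
    """Return pixel coordinates tracing a shape that looks circular once the
    display's non-square pixels are taken into account."""
    points = []
    for i in range(steps):
        angle = 2 * math.pi * i / steps
        x = cx + radius * math.cos(angle)
        # Scale the y radius by pixel width / pixel height so that equal
        # physical distances on screen come out equal, which is what makes
        # the drawn shape a circle rather than an ellipse.
        y = cy + radius * pixel_aspect_ratio * math.sin(angle)
        points.append((round(x), round(y)))
    return points

# With square pixels the correction is a no-op...
square_display = circle_points(140, 96, 50, pixel_aspect_ratio=1.0)
# ...but on an arbitrary customer TV the ratio is unknown, which is exactly
# why the memo says a correct high-resolution circle could not be guaranteed.
unknown_tv = circle_points(140, 96, 50, pixel_aspect_ratio=0.8)
```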
This is the exact problem that Google Android now faces. The revolutionary idea back in 1981, even to Apple, was to throw away the Apple II's cornerstone expandability in exchange for owning the experience from top to bottom:
The secret of mass marketing of software is having a very large and extremely uniform hardware/software base.
To combat fragmentation, for the Macintosh:
There were to be no peripheral slots so that customers never had to see the inside of the machine (although external ports would be provided); there was a fixed memory size so that all applications would run on all Macintoshes; the screen, keyboard, and mass storage device (and, we hoped, a printer) were to be built in so that the customer got a truly complete system, and so that we could control the appearance of characters and graphics.
We also see Jef articulating and forming Apple's nascent core principle of prioritizing innovation over backwards compatibility.
The Apple II/III system is already lost. We cannot go back and simplify, we can only go forward.
This became a key differentiator from Microsoft's no-matter-what policy of maintaining backwards compatibility. Apple's willingness to ditch the old for innovation left it nimble and able to overcome the innovator's dilemma.

In the Future, Robots Will Surf Their Own Internet | Fast Company

Interesting. Today the Internet topples governments. Tomorrow, perhaps, it will topple us if we are not paying attention.

In the Future, Robots Will Surf Their Own Internet | Fast Company

In the Future, Robots Will Surf Their Own Internet


RoboEarth diagram
If robots are to become our overlords, they will need their own Internet to communicate with each other. RoboEarth, a just-launched robot information sharing network, gets them that much closer to world domination.
The EU-funded RoboEarth project is bringing together European scientists to build a network and database repository for robots to share information about the world. They will, if all goes as planned, use the network to store and retrieve information about objects, locations (including maps), and instructions about completing activities. Robots will be both the contributors and the editors of the repository.
The point, according to the RoboEarth project, is to allow robots to learn from past experiences and share them with their peers. The site explains:
Rapid development of sensor and networking technology is now enabling researchers to collect vast amounts of sensor data, and new data-mining tools are being developed to extract meaningful patterns. Researchers are already using networked "feed forward" approaches to make significant advances in machine-based learning systems. Thus far, however, these smart feed forward systems have been operating in isolation from each other. If they are decommissioned, all that learning is lost.
With RoboEarth, one robot's learning experiences are never lost--the data is passed on for other robots to mine. As RedOrbit explains, that means one robot's experiences with, say, setting a dining room table could be passed on to others, so the butler robot of the future might know how to prepare for dinner guests without any prior programming.
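To make the share-and-reuse idea concrete, here is a toy sketch in Python; every name in it is hypothetical, and the real RoboEarth interfaces and data formats differ.

```python
# A toy illustration of the share-and-reuse idea behind RoboEarth; the real
# project's interfaces and data formats differ. All names here are hypothetical.

class SharedKnowledgeBase:
    """Stands in for the networked repository robots would read and write."""

    def __init__(self):
        self._recipes = {}

    def publish(self, task_name, recipe):
        """A robot contributes the steps it learned for a task."""
        self._recipes[task_name] = recipe

    def lookup(self, task_name):
        """Another robot retrieves those steps instead of relearning them."""
        return self._recipes.get(task_name)

repo = SharedKnowledgeBase()

# Robot A learns how to set a table and shares the result.
repo.publish("set_dinner_table", [
    "locate(dining_table)",
    "fetch(plate)", "place(plate, table_setting)",
    "fetch(fork)", "place(fork, left_of=plate)",
])

# Robot B, with no prior programming for the task, reuses the shared recipe.
for step in repo.lookup("set_dinner_table") or []:
    print("executing:", step)
```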
The 35 researchers working on the project expect to be finished within four years. After that, the age of intelligent robots can begin.