10 MINUTES WITH RAYMOND GEUSS ON NIHILISM

in conversation with Raymond Geuss

Is nihilism the most important philosophical problem of our present? Philosopher Raymond Geuss talks to four by three about our misconception of nihilism, outlining three ways of questioning it, while asking whether nihilism is a philosophical or a historical problem and whether we are truly nihilists or might simply be confused.

Raymond Geuss is Emeritus Professor in the Faculty of Philosophy at the University of Cambridge and works in the general areas of political philosophy and the history of Continental Philosophy. His most recent publications include, but are not limited to, Politics and the Imagination (2010) and A World Without Why (2014).

12 FRAGMENTS ON NIHILISM

Eugene Thacker


Are you a nihilist and should you be one? Philosopher Eugene Thacker turns to Friedrich Nietzsche to break down nihilism into fragments of insights, questions, possible contradictions and thought-provoking ruminations, while asking whether nihilism can fulfill itself or whether it always ends up undermining itself.


1. What follows came out of an event held at The New School in the spring of 2015. It was an event on nihilism (strange as that sounds). I admit I had sort of gotten roped into doing it. I blame it on the organizers. An espresso, some good conversation, a few laughs, and there I was. Initially they tell me they’re planning an event about nihilism in relation to politics and the Middle East. I tell them I don’t really have anything to say about the Middle East – or for that matter, about politics – and about nihilism, isn’t the best policy to say nothing? But they say I won’t have to prepare anything, I can just show up, and it’s conveniently after I teach, just a block away, and there’s dinner afterwards…How can I say no?

How can I say no…

 

2. Though Nietzsche’s late notebooks contain many insightful comments on nihilism, one of my favorite quotes of his comes from his early essay “On Truth and Lies in an Extra-Moral Sense.” I know this essay is, in many ways, over-wrought and over-taught. But I never tire of its opening passage, which reads:

In some remote corner of the universe, poured out and glittering in innumerable solar systems, there once was a star on which clever animals invented knowledge. That was the haughtiest and most mendacious minute of “world history” – yet only a minute. After nature had drawn a few breaths the star grew cold, and the clever animals had to die.

One might invent such a fable and still not have illustrated sufficiently how wretched, how shadowy and flighty, how aimless and arbitrary, the human intellect appears in nature. There have been eternities when it did not exist; and when it is done for again, nothing will have happened. For this intellect has no further mission that would lead beyond human life. It is human, rather, and only its owner and producer gives it such importance, as if the world pivoted around it.

The passage evokes a kind of impersonal awe, a cold rationalism, a null-state. In the late 1940s, the Japanese philosopher Keiji Nishitani would summarize Nietzsche’s fable in different terms. “The anthropomorphic view of the world,” he writes, “according to which the intention or will of someone lies behind events in the external world, has been totally refuted by science. Nietzsche wanted to erase the last vestiges of this anthropomorphism by applying the critique to the inner world as well.”

Both Nietzsche and Nishitani point to the horizon of nihilism – the granularity of the human.

 

3. At the core of nihilism for Nietzsche is a two-fold movement: that a culture’s highest values devalue themselves, and that there is nothing to replace them. And so an abyss opens up. God is dead, leaving a structural vacuum, an empty throne, an empty tomb, adrift in empty space.

But we should also remember that, when Zarathustra comes down from the mountain to make his proclamation, no one hears him. They think he’s just the opening band. They’re all waiting for the tight-rope walker’s performance, which is, of course, way more interesting. Is nihilism melodrama or is it slapstick? Or something in-between, a tragic-comedy?

 

4. I’ve been emailing with a colleague about various things, including our shared interest in the concepts of refusal, renunciation, and resignation. I mention I’m finishing a book called Infinite Resignation. He replies that there is surprisingly little on resignation as a philosophical concept. The only thing he finds is a book evocatively titled The Art of Resignation – which turns out to be a self-help book about how to quit your job.

I laugh, but secretly wonder if I should read it.

 

5. We do not live – we are lived. What would a philosophy have to be in order to begin from this, rather than arriving at it?

 

6. “Are you a nihilist?”

“Not as much as I should be.”

 

7. We do Nietzsche a disservice if we credit him for the death of God. He just happened to be at the scene of the crime, and found the corpse. Actually, it wasn’t even murder – it was a suicide. But how does God commit suicide?

 

8. By a process I do not understand, scientists estimate that the planet is capable of sustaining a population of around 1.2 billion – though the current population is upwards of 7 billion. Bleakness on this scale is difficult to believe, even for a nihilist.

 

9. I find Nietzsche’s notebooks from the 1880s to be a fascinating space of experimentation concerning the problem of nihilism. The upshot of his many notes is that the way beyond nihilism is through nihilism.

But along the way he leaps and falls, skips and stumbles. He is by turns analytical and careless; he uses argumentation and then bad jokes; he poses questions without answers and problems without solutions; and he creates typologies, an entire bestiary of negation: radical nihilism, perfect nihilism, complete or incomplete nihilism, active or passive nihilism, Romantic nihilism, European nihilism, and so on…

Nietzsche seems so acutely aware of the fecundity of nihilism.

10. It’s difficult to be a nihilist all the way. Eventually nihilism must, by definition, undermine itself. Or fulfill itself.

 

11. Around 1885 Nietzsche writes in his notebook: “The opposition is dawning between the world we revere and the world which we live – which we are. It remains for us to abolish either our reverence or ourselves.”

 

12. If we are indeed living in the “anthropocene,” it would seem sensible to concoct forms of discrimination that are adequate to it. Perhaps we should cast off all forms of racism, sexism, classism, nationalism, and the like, in favor of a new kind of discrimination – that of a species-ism. A disgust of the species, which we ourselves have been clever enough to think. A species-specific loathing that might also be the pinnacle of the species. I spite, therefore I am. But is this still too helpful, put forth with too much good conscience?

The weariness of faith. The incredulity of facts.

 

Eugene Thacker is Professor at The New School in New York City. He is the author of several books, including In The Dust Of This Planet (Zero Books, 2011) and Cosmic Pessimism (Univocal, 2015).

The Dark Secret at the Heart of AI

No one really knows how the most advanced algorithms do what they do. That could be a problem.

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

The artist Adam Ferriss created this image, and the one below, using Google Deep Dream, a program that adjusts an image to stimulate the pattern recognition capabilities of a deep neural network. The pictures were produced using a mid-level layer of the neural network.

Adam Ferriss

In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”

Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.
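To make the contrast concrete, here is a minimal sketch in Python – with made-up numbers and a deliberately simple model – of the two schools: a hand-written rule whose logic anyone can read, and a program that derives its own decision logic from example data and a desired output.

```python
# Hypothetical illustration: hand-coded rules versus a model learned from examples.
from sklearn.tree import DecisionTreeClassifier

# School one: the programmer writes the logic, so its inner workings are transparent.
def rule_based_loan_decision(income, debt):
    return income > 50_000 and debt < 10_000

# School two: the machine infers its own rules from example data.
# Each row is [income, debt]; labels are 1 = approved, 0 = denied (invented data).
X = [[60_000, 5_000], [30_000, 20_000], [80_000, 2_000], [25_000, 15_000]]
y = [1, 0, 1, 0]
model = DecisionTreeClassifier().fit(X, y)

print(rule_based_loan_decision(55_000, 8_000))  # logic a person wrote down
print(model.predict([[55_000, 8_000]]))         # logic the program worked out itself
```

A decision tree this small is still easy to inspect; the deep networks discussed below follow the same learn-from-examples pattern but with millions of parameters, which is where the opacity comes from.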

At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.

Adam Ferriss

The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.

You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
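As a rough sketch of those mechanics – plain NumPy, with toy sizes invented for illustration – the loop below passes an input forward through two layers of simulated neurons, then uses back-propagation to nudge every weight a little toward a desired output.

```python
# Minimal sketch of a forward pass plus back-propagation (toy dimensions).
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((1, 4))                    # input, e.g. four pixel intensities
target = np.array([[1.0]])                # the output we want the network to learn

W1 = rng.standard_normal((4, 8)) * 0.1    # weights of the first layer of "neurons"
W2 = rng.standard_normal((8, 1)) * 0.1    # weights of the output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass: each layer transforms its input and hands the result on.
    h = sigmoid(x @ W1)
    out = sigmoid(h @ W2)

    # Back-propagation: push the output error backwards through the layers
    # and adjust every weight in the direction that reduces that error.
    err_out = (out - target) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ err_out
    W1 -= 0.5 * x.T @ err_h

print(out)  # after training, close to the desired output of 1.0
```

A real deep network does exactly this, but with millions of weights spread across dozens or hundreds of layers, which is why no single weight or neuron carries a meaning a person can read off.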

The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.
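In code, that stack of abstraction levels is simply a sequence of layers. The sketch below – PyTorch, with invented layer sizes and a 224×224 input assumed – shows the rough shape of such a dog-versus-not-dog classifier; the comments mark which layers would tend to pick up edges, parts, and whole objects.

```python
# Illustrative only: a small convolutional network with layers at rising
# levels of abstraction (sizes are arbitrary, not from any real system).
import torch.nn as nn

model = nn.Sequential(
    # Lower layers: respond to simple things such as outlines and color blobs.
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    # Middle layers: combine those into more complex stuff such as fur or eyes.
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    # Topmost layers: pull the parts together and score "dog" vs. "not dog".
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 2),   # assumes a 224x224 input image
)
```

The same layered pattern, fed different kinds of input, is what handles sounds, text, or steering-wheel movements.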

Ingenious strategies have been used to try to capture and thus explain in more detail what’s happening in such systems. In 2015, researchers at Google modified a deep-learning-based image recognition algorithm so that instead of spotting objects in photos, it would generate or modify them. By effectively running the algorithm in reverse, they could discover the features the program uses to recognize, say, a bird or building. The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges. The images proved that deep learning need not be entirely inscrutable; they revealed that the algorithms home in on familiar visual features like a bird’s beak or feathers. But the images also hinted at how different deep learning is from human perception, in that it might make something out of an artifact that we would know to ignore. Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.
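The machinery behind that reversal is compact: hold the trained network’s weights fixed and instead adjust the image itself, by gradient ascent, so that a chosen layer’s activations grow stronger. The sketch below is an illustration of the idea rather than Deep Dream’s actual code – it uses a pretrained VGG16 from torchvision as a stand-in for Google’s model, and the layer index, step size, and iteration count are arbitrary choices.

```python
# Rough sketch of the Deep Dream idea: amplify whatever a chosen layer already
# "sees" in the image by gradient ascent on the image, not on the weights.
import torch
from torchvision import models

net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
net.requires_grad_(False)                 # the network's weights stay fixed
layer = 20                                # an arbitrary mid-level layer

img = torch.rand(1, 3, 224, 224, requires_grad=True)   # noise; a photo also works

for _ in range(50):
    acts = img
    for i, module in enumerate(net):
        acts = module(acts)
        if i == layer:
            break
    loss = acts.norm()                    # how strongly this layer is responding
    loss.backward()
    with torch.no_grad():
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
```

Targeting a single neuron instead of a whole layer – the probe described in the next paragraph – is the same loop with the loss restricted to that one unit.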

Further progress has been made using ideas borrowed from neuroscience and cognitive science. A team led by Jeff Clune, an assistant professor at the University of Wyoming, has employed the AI equivalent of optical illusions to test deep neural networks. In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for. One of Clune’s collaborators, Jason Yosinski, also built a tool that acts like a probe stuck into a brain. His tool targets any neuron in the middle of the network and searches for the image that activates it the most. The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.

This early artificial neural network, at the Cornell Aeronautical Laboratory in Buffalo, New York, circa 1960, processed inputs from light sensors.
Ferriss was inspired to run Cornell’s artificial neural network through Deep Dream, producing the images above and below.

Adam Ferriss

We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. She was diagnosed with breast cancer a couple of years ago, at age 43. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment. She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond just medical records. She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”

After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. “You really need to have a loop where the machine and the human collaborate,” Barzilay says.

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.

This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.
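In the spirit of that example-and-keyword style of explanation – a generic sketch, not the Washington team’s actual method – one simple way to surface the words behind a black-box text classifier’s decision is to drop each word in turn and watch how the score moves:

```python
# Hypothetical sketch: explain a black-box text classifier by word-level perturbation.

def explain(text, score, top_k=3):
    """score: any function mapping a string to a probability, e.g. P(message is flagged)."""
    words = text.split()
    baseline = score(text)
    influence = {}
    for i, word in enumerate(words):
        without = " ".join(words[:i] + words[i + 1:])
        influence[word] = baseline - score(without)   # how much this word propped up the score
    # The words whose removal lowers the score the most become the short explanation.
    return sorted(influence, key=influence.get, reverse=True)[:top_k]

# Usage with some trained model's scoring function (hypothetical name):
# explain("wire the funds to the account tonight", classifier.flag_probability)
```

Highlighting the significant parts of an image works on the same principle: mask regions of the picture and keep the ones whose removal changes the prediction the most.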

Adam Ferriss

One drawback to this approach and others like it, such as Barzilay’s, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Guestrin. “We’re a long way from having truly interpretable AI.”

It doesn’t have to be a high-stakes situation like cancer diagnosis or military maneuvers for this to become an issue. Knowing AI’s reasoning is also going to be crucial if the technology is to become a common and useful part of our daily lives. Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration for his team as it tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn’t discuss specific plans for Siri’s future, but it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says.

Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”

If that’s so, then at some stage we may have to simply trust AI’s judgment or do without using it. Likewise, that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.

To probe these metaphysical concepts, I went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind. A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do. “The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?” he tells me in his cluttered office on the university’s idyllic campus.

He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”

Choosing the Proper Tool for the Task

Assessing Your Encryption Options

So, you’ve decided to encrypt your communications. Great! But which tools are the best? There are several options available, and your comrade’s favorite may not be the best for you. Each option has pros and cons, some of which may be deal breakers—or selling points!—for you or your intended recipient. How, then, do you decide which tools and services will make sure your secrets stay between you and the person you’re sharing them with, at least while they’re in transit?

Keep in mind that you don’t necessarily need the same tool for every situation; you can choose the right one for each circumstance. There are many variables that could affect what constitutes the “correct” tool for each situation, and this guide can’t possibly cover all of them. But knowing a little more about what options are available, and how they work, will help you make better-informed decisions.


Signal

Pros: Signal is free, open source, and easy to use, and it features a desktop app, password protection for Android, and secure group messages. It’s also maintained by a politically conscious nonprofit organization, and it offers the original implementation of an encryption protocol used by several other tools, ephemeral (disappearing) messages, control over notification content, and sent/read receipts – plus it can encrypt calls and offers a call-and-response two-word authentication phrase so you can verify your call isn’t being tampered with.

Cons: Signal offers no password protection for iPhone, and being maintained by a small team means fixes are sometimes on a slow timeline. Your Signal user ID is your phone number, you may have to talk your friends into using the app, and it sometimes suffers from spotty message delivery.

Signal certainly has its problems, but using it won’t make you LESS secure. It’s worth noting that sometimes Signal messages never reach their endpoint. This glitch has become increasingly rare, but Signal may still not be the best tool for interpersonal relationship communications when emotions are heightened! One of Signal’s primary problems is failure to recognize when a message’s recipient is no longer using Signal. This can result in misunderstandings ranging from hilarious to relationship-ending. Additionally, Signal for Desktop is a Chrome plugin; for some, this is a selling point, for others, a deal breaker. Signal for Mac doesn’t offer encryption at rest, which means that unless you’ve turned it on as a default for your computer, your stored data isn’t encrypted. It’s also important to know that while Signal does offer self-destructing messages, the timer is shared, meaning that your contact can shut off the timer entirely and the messages YOU send will cease to disappear.

Wickr

Pros: Wickr offers free, ephemeral messaging that is password protected. Your user ID is not dependent on your phone number or other personally identifying info. Wickr is mostly reliable and easy to use—it just works.

Cons: Wickr is not open source, and the company’s profit model (motive) is unclear. There’s also no way to turn off disappearing messages.

Wickr is sometimes called “Snapchat for adults.” It’s an ephemeral messaging app which claims to encrypt your photos and messages from endpoint to endpoint, and it stores everything behind a password. It probably does exactly what it says it does, and it is regularly audited, but Wickr’s primary selling point is that your user login is independent of your cell phone number. You can log in from any device, including a disposable phone, and still have access to your Wickr contacts, making communication fairly easy. The primary concern with using Wickr is that it’s a free app and we don’t really know what those who maintain it gain from doing so; it should absolutely be used with that in mind. Additionally, it is worth keeping in mind that Wickr is suboptimal for communications you actually need to keep, as there is no option to turn off ephemeral messaging, and the timer only goes up to six days.

Threema

Pros: Threema is PIN-protected, offers decent usability, allows file transfers, and your user ID is not tied to your phone number.

Cons: Threema isn’t free, isn’t open source, doesn’t allow ephemeral messaging, and ONLY allows a 4-digit PIN.

Threema’s primary selling point is that it’s used by some knowledgeable people. Like Wickr, Threema is not open source but is regularly audited, and likely does exactly what it promises to do. Also like Wickr, the fact that your user ID is not tied to your phone number is a massive privacy benefit. If lack of ephemerality isn’t a problem for you (or if Wickr’s ephemerality IS a problem for you), Threema pretty much just works. It’s not free, but at $2.99 for download, it’s not exactly prohibitively expensive for most users. With a little effort, Threema also makes it possible for Android users to pay for their app “anonymously” (using either Bitcoin or Visa gift cards) and directly download it, rather than forcing people to go through the Google Play Store.

WhatsApp

Pros: Everyone uses it, it uses Signal’s encryption protocol, it’s super straightforward to use, it has a desktop app, and it also encrypts calls.

Cons: Owned by Facebook, WhatsApp is not open source, has no password protection and no ephemeral messaging option, is a bit of a forensic nightmare, and its key change notifications are opt-in rather than default.

The primary use case for WhatsApp is to keep the content of your communications with your cousin who doesn’t care about security out of the NSA’s dragnet. The encryption WhatsApp uses is good, but it’s otherwise a pretty unremarkable app with regards to security features. It’s extremely easy to use, is widely used by people who don’t even care about privacy, and it actually provides a little cover due to that fact.

The biggest problem with WhatsApp appears to be that it doesn’t necessarily delete data, but rather deletes only the record of that data, making forensic recovery of your conversations possible if your device is taken from you. That said, as long as you remain in control of your device, WhatsApp can be an excellent way to keep your communications private while not using obvious “security tools.”

Finally, while rumors of a “WhatsApp backdoor” have been greatly exaggerated, if WhatsApp DOES seem like the correct option for you, it is definitely a best practice to enable the feature which notifies you when a contact’s key has changed.

Facebook Secret Messages

Pros: This app is widely used, relies on Signal’s encryption protocol, offers ephemeral messaging, and is mostly easy to use.

Cons: You need to have a Facebook account to use it, it has no desktop availability, it’s kind of hard to figure out how to start a conversation, there’s no password protection, and your username is your “Real Name” as defined by Facebook standards.

Facebook finally rolled out “Secret Messages” for the Facebook Messenger app. While the Secret Messages are actually pretty easy to use once you’ve gotten them started, starting a Secret Message can be a pain in the ass. The process is not terribly intuitive, and people may forget to do it entirely as it’s not Facebook Messenger’s default status. Like WhatsApp, there’s no password protection option, but Facebook Secret Messages does offer the option for ephemerality. Facebook Secret Messages also shares the whole “not really a security tool” thing with WhatsApp, meaning that it’s fairly innocuous and can fly under the radar if you’re living somewhere people are being targeted for using secure communication tools.

There are certainly other tools out there in addition to those discussed above, and use of nearly any encryption is preferable to sending plaintext messages. The most important things you can do are choose a solution (or series of solutions) which works well for you and your contacts, and employ good security practices in addition to using encrypted communications.

There is no one correct way to do security. Even flawed security is better than none at all, so long as you have a working understanding of what those flaws are and how they can hurt you.

— By Elle Armageddon

Burner Phone Best Practices

A User’s Guide

A burner phone is a single-use phone, unattached to your identity, which can theoretically be used to communicate anonymously in situations where communications may be monitored. Whether or not using a burner phone is itself a “best practice” is up for debate, but if you’ve made the choice to use one, there are several things you should keep in mind.

Burner phones are not the same as disposable phones.

A burner phone is, as mentioned above, a single-use phone procured specifically for anonymous communications. It is considered a means of clandestine communication, and its efficacy is predicated on having flawless security practices. A disposable phone is one you purchase and use normally with the understanding that it may be lost or broken.

Burner phones should only ever talk to other burner phones.

Using a burner phone to talk to someone’s everyday phone leaves a trail between you and your contact. For the safety of everyone within your communication circle, burner phones should only be used to contact other burner phones, so your relationships will not compromise your security. There are a number of ways to arrange this, but the best is probably to memorize your own number and share it in person with whoever you’re hoping to communicate with. Agree in advance on an innocuous text they will send you, so that when you power your phone on you can identify them based on the message they’ve sent and nothing else. In situations where you are meeting people in a large crowd, it is probably OK to complete this process with your phone turned on, as well. In either case, it is unnecessary to reply to the initiation message unless you have important information to impart. Remember too that you should keep your contacts and your communications as sparse as possible, in order to minimize potential risks to your security.

Never turn your burner on at home.

Since cell phones both log and transmit location data, you should never turn on a burner phone somewhere you can be linked to. This obviously covers your home, but should also extend to your place of work, your school, your gym, and anywhere else you frequently visit.

Never turn your burner on in proximity to your main phone.

As explained above, phones are basically tracking devices with additional cool functions and features. Because of this, you should never turn on a burner in proximity to your “real” phone. Having a data trail placing your ostensibly anonymous burner in the same place at the same time as your personally-identifying phone is an excellent way to get identified. This also means that unless you’re in a large crowd, you shouldn’t power your burner phone on in proximity to your contacts’ powered-up burners.

Given that the purpose of using a burner phone is to preserve your anonymity and the anonymity of the people around you, identifying yourself or your contacts by name undermines that goal. Don’t use anyone’s legal name when communicating via burner, and don’t use pseudonyms that you have used elsewhere either. If you must use identifiers, they should be unique, established in advance, and not reused.

Consider using an innocuous passphrase to communicate, rather than using names at all. Think “hey, do you want to get brunch Tuesday?” rather than “hey, this is Secret Squirrel.” This also allows for call-and-response as authentication. For example, you’ll know the contact you’re intending to reach is the correct contact if they respond to your brunch invitation with, “sure, let me check my calendar and get back to you.” Additionally, this authentication practice allows for the use of a duress code, “I can’t make it to brunch, I’ve got a yoga class conflict,” which can be used if the person you’re trying to coordinate with has run into trouble.

Beware of IMSI catchers.

One reason you want to keep your authentication and duress phrases as innocuous as possible is that law enforcement agencies around the world are increasingly using IMSI catchers, also known as “Stingrays” or “Cell Site Simulators,” to capture text messages and phone calls within their range. These devices pretend to be cell towers, intercept and log your communications, and then pass them on to real cell towers so your intended contacts also receive them. Because of this, you probably don’t want to use your burner to text things like, “Hey are you at the protest?” or “Yo, did you bring the Molotovs?”

Under normal circumstances, the use of encrypted messengers such as Signal can circumvent the use of Stingrays fairly effectively, but as burner phones do not typically have the capability for encrypted messaging (unless you’re buying burner smartphones), it is necessary to be careful about what you’re saying.

Burner phones are single-use.

Burner phones are meant to be used once, and then considered “burned.” There are a lot of reasons for this, but the primary reason is that you don’t want your clandestine actions linked. If the same “burner” phone starts showing up at the same events, people investigating those events have a broader set of data to build profiles from. What this means is, if what you’re doing really does require a burner phone, then what you’re doing requires a fresh, clean burner every single time. Don’t let sloppy execution of security measures negate all your efforts.

Procure your burner phone carefully.

You want your burner to be untraceable. That means you should pay for it in cash; don’t use your debit card. Ask yourself: are there surveillance cameras in or around the place you are buying it? Don’t bring your personal phone to the location where you buy your burner. Consider walking or biking to the place you’re purchasing your burner; covering easily-identifiable features with clothing or makeup; and not purchasing a burner at a location you frequent regularly enough that the staff recognize you.

Never assume burner phones are “safe” or “secure.”

For burner phones to preserve your privacy, everyone involved in the communication circle has to maintain good security culture. Safe use of burners demands proper precautions and good hygiene from everyone in the network: a failure by one person can compromise everyone. Consequently, it is important both to make sure everyone you’re communicating with is on the same page regarding the safe and proper use of burner phones, and also to assume that someone is likely to be careless. This is another good reason to be careful with your communications even while using burner phones. Always take responsibility for your own safety, and don’t hesitate to erase and ditch your burner when necessary.

— By Elle Armageddon

Why I Choose to Live in Wayne National Forest

TO THE POINT

Our current system is like an abandoned parking lot. Asphalt was laid, killing life and turning everything into a homogenous blackness, a dead sameness. The levers of maintaining this have broken down. No one is coming to touch up the asphalt. In abandoned parking lots, cracks form and life grows from the cracks.

All these riots, environmental catastrophes, food crises, occupations of land by protestors, and various breakdowns in daily life are cracks in the asphalt. What will spring from the cracks depends on what seed is planted within them. Beautiful flowers could grow. Weeds could grow.

Modern rich nations have walled themselves in. Colonized India was a world apart from Britain. The United States exists an ocean away from the places it drone strikes. Citizenship acts as a tool of ethnic cleansing. The world, according to the new nationalists, will be a checkerboard of racially homogenous governments with swords continuously drawn. The rich nations will now literally wall themselves in, ensure their “racial purity”, and steal from the poorer nations until the end of days. At least, this is the future envisioned by the Trump/Bannon regime. This is the future governments everywhere seem to be carrying us toward, a divided people screaming in joy or anger.

The continued and sped-up process of fracking Wayne National Forest, Ohio’s only national forest, fits perfectly into this worldview – a governance that consists in managing the cracks. The power of this world, and of the world our rulers wish to realize, depends on fracking wells, oil rigs, pipelines, and energy infrastructure in general. To oppose this infrastructure is to oppose this system, to take as our starting point the cracks.

I am living in Wayne National Forest in hopes of, first and foremost, protecting the forest. I hope to crack the asphalt and plant a flower.

Everyone is welcome to join the occupation, beginning on May 12th. Everyone is welcome to visit. Everyone is welcome to participate, in one way or another, in this land defense project.

EXTENDED

Some conclude that the election of Trump signals an end of the left. The opinion seems rushed, and forces could yet push for a revitalization, but if it is true, then good riddance.

Those of the left are preoccupied with flaunting ego. Taking up their various labels – communist, socialist, anarchist, Trotskyist – seems more about themselves than any revolutionary project. The labelling urge is bureaucratic. Leftists have done themselves no favors talking like politicians. Their endless meetings bear all the marks of officialdom and red tape. Distant from daily life, they alienate those who truly seek a new world. At most meetings, not much more is accomplished than an agreement to continue having meetings. This is a hallmark of bureaucracy.

Rally after rally features the same dead tactics and strategies. Standing on the sidewalk, holding signs, and chanting slogans at buildings will never bring change. These events only pose a threat when a variety of activity occurs, when people stop listening to the activists. This could be anything from smashing up cop cars to a group of musicians playing spur of the moment.

Supervisors hate the unplanned.

If change is sought, then an understanding of the ruling structure is vital. Understanding the current arrangement takes a grasp of history. History reveals how the present came to be, and such recognition provides the basis for comprehending our current world.

The first known civilization sprang up in modern-day Iraq around 6,000 years ago. This did not occur because humans became smarter or more physically fit. Modern humans evolved physically around 100,000 years ago and mentally 40,000 years ago. The five main qualities of civilization are: 1. city life; 2. specialized labor; 3. control by a small group of resources above what is needed to survive, leading to 4. class rank and 5. government. This is still the order we face today.

Before civilization’s ascendancy, humans organized life in various ways. One was the hunter-gatherer band. These were groups of 100 or fewer, usually with no formal leadership and no differences in wealth or status. These groups were mobile, never staying in one spot more than temporarily. Again, it was not due to stupidity that these people did not develop more civilized ways of living. One could argue the hunter-gatherer life promotes a general knowledge while modern society encourages a narrow, yet dense, knowledge.

Agriculture and animal domestication led to farming villages and settled life. With this came the “Trap of Sedentism.” After a few generations of village life people forgot the skills needed to live nomadically and became dependent upon the village. In general, people worked harder and longer to survive while close quarters with each other and animals increased illness. With greater access to food, the population increased.

Chiefdoms were another form of pre-civilized living. These ranked societies had various clans placed differently on the pecking order, with everyone governed by a chief. The chief controlled whatever food was produced above what the village needed to survive – the surplus. These societies came the closest to civilized living patterns.

Agriculture’s surplus allowed more people to feast than in the hunter-gatherer band. With more people working the fields and tinkering with technology came innovation and with innovation a larger surplus. This larger surplus allowed for continued population growth. This cycle proceeded to the birth of civilization and became more rapid with its birth.

Economists have advertised the story of “barter” for a very long time, perhaps because it is so vital to their domain of study. The narrative is as follows: John owns 3 pairs of boots but needs an axe, and Jane has 2 axes but needs a pair of boots. The two trade with each other to get what they want, each trying to get the upper hand in the trade. The massive problem with this story is that it is false.

Adam Smith, an economist from the late 1700s, popularized this tale and made it the basis of economics. He asserted that one would find barter wherever money did not exist, in all cases, and pointed to aboriginal Americans as an example. Yet when Europeans came to conquer the continent, they did not find a land of barter where money was nonexistent.

Barter took place between strangers and enemies. Within the village, one found different forms of distribution. One place might have a central hub that people add to and take from. Another might have free gift-giving among its members. To return to our John and Jane example: John takes Jane’s axe, and Jane knows that when she needs something of John’s, he will let her have free use of it. What we never find happening is barter.

This is important because the barter folktale convinces people our present system is a reasonable development. If humanity’s natural propensity is to barter, then money and profitable exchange seem like evident progression. This is not to say that barter is “unnatural”, as it came from the heads and relationships of people, but that it is not the only game in town. If it is not the only game in town, and there are a multitude of ways humanity could and has organized itself, then the current system can’t be justified as the necessary development of human nature.

So, for most of human history impersonal government power did not exist. Communities were self-sufficient and relationships were equal and local. The rise of civilization and government changed this. Dependency and inequality marked associations and the few held power over the many.

Surplus food put some in a position where they did not need to work for their survival. While most still obtained resources from the earth and survived on their labor, a few extracted supplies from the many. This small group became the wealthy ruling class and controlled the allocation of production excess. The basic relationship here is parasitic.

The smart parasite practices restrained predation, meaning it doesn’t use up all of the host’s energy, so that the host stays alive and the parasite’s own survival continues. The smartest parasite defends its host. Rulers learned to protect the workers for this reason and, in the process, increased these laborers’ dependence on them. Increasing population developed into cities, and problems of coordination occurred with more people living in a single space. The ruling parasites organized social life to maintain their control of the surplus and, at the same time, rationalize the city to solve problems of communication and coordination.

State power emerged from large-scale infrastructural projects as well, specifically irrigation. Irrigation is a way of diverting water from the source to fields. Large-scale irrigation endeavors required thousands of people and careful utilization of raw materials. Undertaking such a plan required a small group with the technical know-how to control what labor was done, how and when it was done, how much material was needed, and when and how it was used, and to utilize these same networks of influence for future repairs. Large infrastructure and complex city life increased the dependency of producers on rulers.

The city is the basis of civilization. The city, simply defined, is land where too many people exist for it to be self-sufficient. It requires continuous resource importation to keep its large population alive, a population that cannot live off the soil. This impersonal power, whose structures don’t change, is based on mindless expansion outside of the city in search of resources. War, of course, is the most efficient way to grab these resources when one city’s importation search runs head-on with another’s, or with people who live in the way of what is sought. Conquering existed before civilization yet became perfected within its system.

Emperors emerged to rule the masses, gaining prestige from war prowess. Forming empires, these leaders ushered in a new form of rule through large territories gained in conquest. Peasants who controlled their own land and were not controlled by feudal lords came into contact with government only once a year. Politics was centralized in the palace. Ruling families might change, but this did not affect peasant lives. Without modern surveillance technologies and police institutions, it was virtually impossible to continuously govern every piece of land. Peasants organized their villages on their own. The only time they saw their government was when the army was sent to collect taxes. This all changed with the rise of the nation-state and mass politics.

Feudalism was based on loyalty to the King and land distribution by the King to obedient lords. Lords, in turn, granted parts of their land to vassals under similar conditions of obedience. Governing authority was decentralized. The King was the ultimate feudal lord, but could only flex on those lords who held land from him. The entire system depended on the lords’ willingness to obey or the ability of the King to rally enough troops to crush the disobedient. This system was basically moneyless, relying on rents in food and other goods flowing up the feudal pyramid. This changed with increased commercial activity.

Buying and selling began to replace rents, with power beginning to shift to merchants and urban commercial activity in general. This change gave Kings the ability to tax those within their domain in money. Centralization was required to do this, and it undermined feudal relations – the lord’s control of his own land. Any further development of commercial activity would strengthen the monarchy over the nobility.

Changes in warfare required taxes and the creation of a permanent army. Before, Kings would call upon their lords who would rally their vassals to the King’s will. Feudal armies were small, unreliable, and war was local. With Kings increasing their revenue, they were able to hire foreign mercenaries and pay a small permanent army. If other Kings did not want to be conquered, they conformed or died. With a permanent army came a need to increase taxation for maintenance, further undermining feudalism.

Kingly taxation of the populace established a direct link between the highest governing authority and the lowest on the power chain. This completely undermined the rule of lords and centralized power into national monarchies. The primary concern of these nations was that people consented to taxes.

Another way the nation-state emerged was through city-state infighting. Dictators would rise within the city to calm civil war taking place between the rich and poor. These dictators would conquer more land and become princes. When these carved out territories fell apart, cities and other units would try to conquer each other to fill the power vacuum. Eventually, consolidation would happen and usually with the help of mercenaries. Since mercenaries held it all together, whoever controlled the national treasury had power.

When vast empires fell apart, specifically in the Middle East, there arose smaller governing units. These smaller units were concerned with conquering and so had to develop militaries. To do this they taxed the population and could only do so if the people consented, meaning they had to provide services and other incentives. Politics moved out of the palace to everywhere. The nation-state gave birth to mass-politics.

The nation-state is totalitarian by nature. It must care about what its population is doing. Government presence went from one year at tax time to being a constant. Laws upon laws developed, strictly regulating the life of the people in the borders of the nation. The daily life of the people was now bound together with the health and viability of the system. Here, we find the international system of nation-states and world market.

Peasants no longer grew food, ate it, and had a surplus. Now, they sold their food on the market, which the nation-state taxed, in exchange for money and used this money to buy food and pay taxes. Urban centers made goods for a taxable wage and the goods they made could be taxed. Imported goods from other nations could be taxed as well. Truly, all of daily life was absorbed into the system. People’s continued consent and work within new market parameters called forth the totalitarian nature of the nation-state.

Economic development led to restructuring. Small craftsmen went out of business when factory production was able to make, and therefore sell, goods cheaper and faster. These craftsmen found themselves doing unskilled and semiskilled labor on the factory floor. Before, production was individual. Those that produced a good also owned the shop and tools so it made sense that they should get all the money earned. Factory production saw creation become social, with many helping to make the goods, while payment stayed individual, with factory owners who contributed no labor gaining all the profit for simply owning the tools and the building. This is the same parasitic relationship found throughout all of civilization, just new roles and new ways for the ruling class to live off of the labor of many.

The workers movement developed in response to this, made up of various left ideologies, from Marxian communism to anarchism. Regardless of ideological preference, the idea was the same: the factory was the kernel of the new world. People had been separated from the land and from each other by borders, styles of work, race, and a number of other things. The factory brought all these different types of people together under the same conditions. The more the factory spread, the more people were united by their similar exploitation. Eventually, they would rise up and usher in a new world based on freedom and equality.

There were problems with this. People were united in their separation. It took the imposition of an ethic by the workers movement, that all these different types of people should identify first and foremost as workers, for collective action to take place. The workers did not all have similar interests. A young, white, single male has very different concerns from a single black immigrant mother, even inside the same factory. Obviously, government leaders and factory owners used these differences to their own advantage by privileging some groups over others. The slogan “An Injury to One is an Injury to All” was based more on faith than fact.

The workers movement also viewed the factory’s mass employment with hope. With massive profits, owners would reinvest this money in machines and other tools. Needing people to work the new equipment, they hired. Selling more products, created more efficiently, led to more profits, and the cycle continued. As the factory system expanded, it was believed capitalism was bringing about its own collapse. More and more people were being united by a common condition, that of the worker, and eventually their false separations would subside. They would see each other as the same, regardless of creed or color, see their true enemy in the factory owners and their government, and revolt.

For this reason, the workers movement advocated the expansion of the factory in a policy called “proletarianization.” When the Bolshevik Communists came to power in Russia, their main concern was to industrialize the nation for this purpose, as was that of Communist governments elsewhere. One could ask the obvious question: Would spreading the factory system and the working-class condition really bring about its end? Would spreading the plantation system and the slave condition end slavery, or strengthen it?

If Trump is the end of the left, good riddance.

The conditions that brought about the original workers movement have changed, yet the left seems blind to this or resorts to mental gymnastics. For starters, the current economy is deindustrializing in America and post-industrial worldwide. Even in current industrial powerhouses like China and India, the employment and growth rates of an earlier period are no longer found. For the United States, Europe, and the West in general, there is no real industrial manufacturing base. This type of work happens only in the colonized world or in prison. What remains are sweatshops of various types in different spaces.

In fact, it may even be fallacious to speak of a “colonized” world. The nation-state seems not to matter anymore. A new, global system has developed. Transnational corporations organize social life, almost everywhere, to operate for the creation of value. Every Facebook post made and every online search informs advertisers and helps businesses adapt their products. The spending habits tracked on your debit card help companies learn who you are and what type of products you like. One’s interaction with the current world contributes to value creation. In other words, production has moved from the workplace to all of life, and this has only been possible with modern communication technology and the new post-industrial economy.

The workers of today are not the same as those of the past, in this country or in countries similarly situated. The left, when admitting that things have changed, will then perform backflips to claim that nothing has changed. The service sector has come to dominate, yet the left holds its orientation to be exactly the same as in the factory era. I was discussing this with a Trotskyist friend who worked a service sector job at a burrito joint. Since workers were still paid a wage, he claimed, the form of capitalist exploitation had not changed.

Take the example of the burrito joint: the harvesting of the lettuce, tomatoes, and other food items used to make those burritos was most likely done in an underdeveloped country, or by migrants or prisoners in this one. Those workers receive wages much lower than those in the service sector (usually), and their labor is more vital to the economic set-up than easily automated service jobs. If they did not harvest the food, my Trotskyist friend would have no lettuce to put on anyone’s burrito. Building a burrito is not the same as building a highway, a car, or a skyscraper, or harvesting fields. No kernel of a new world can be seen within this type of work, except by someone who belongs in a psychiatric ward.

So, how will a better world be brought about? I think anyone who believes they know the answer to this question is arrogant and needs to come back down to earth. I certainly do not know the answer. I will provide some thoughts to help answer this question.

Every single revolution has failed. The French Revolution, the American Revolution, the Russian Revolution, the list goes on: all have failed to usher in a world in which the few no longer dominate the many. To hang on to these past conceptions of revolution is to condemn the next one to defeat. This means a rethinking of fundamental questions is needed.

What does revolutionary action succeed at doing?

First and foremost, it succeeds at establishing a set of values within a subversive context. Courage is a good thing to find in the hearts of people, yet the soldier who goes off to fight and die is also “courageous,” and the last thing revolutionary action aims at is getting people to join the armed forces. An insurrectionary act affirms notions of justice, courage, honor, right and wrong, freedom, kindness, empathy, and so on that completely negate the selfishness, materialism, and overall toxicity of the dominant values.

This is where anarchists who fetishize violence miss the mark. Simply put, just because we burn everything to the ground does not mean people stop being assholes. This is not to say, however, that these values won’t get affirmed in riots and the like. Who could say those in Ferguson, Baltimore, and many other places were not courageous, with deep notions of justice, right and wrong, and freedom? These values can also be affirmed by wise grandparents going on a hike with their grandson, a teacher who treats her students as equals, a victim who stands up to their bully, a group of musicians playing carefree, a rope swing and a group of good people, graffiti, sharing a smoke, stealing from Walmart, fighting mobilized Nazis, and many other things.

Revolutionary action does not just happen at a march or a political meeting. I’d go so far as to argue that it happens in those places less often than elsewhere.

Secondly, it succeeds in taking space to keep these values and this energy going. It takes space and organizes the shared life within it in a completely new way. It may even be wrong to describe this as “organized.”

When hegemonic powers fall apart, power REALLY does go back to individual people. Depending on how we relate to each other, flowers or weeds could grow. What seed is planted in the cracks?

How is power disrupted?

From here, we can look to the most interesting struggle to occur in the United States in many years: Standing Rock. For all its problems, the Standing Rock resistance highlighted some important things. Power is found in infrastructure. The construction of the pipeline only strengthens the world of pipelines and oil dependency. These constructions, from oil pipelines to highways to electrical systems to fracking equipment, help keep this world running. Those of us who went to Standing Rock and stayed with a certain group in Sacred Stone saw the banner: “Against the Pipeline and Its World!”

Standing Rock had one camp that sat in the path of the pipeline to block construction, until it was forcibly removed by the police, and camps across the river. This struggle blocked the construction of a world it did not want to see and built the one it did want right in the space it captured. It had its own food supply, water supply, and so on. It had its own logistical system, outside of government and business. It relied on the power of people.

During the Occupy movement, it seemed natural for those in Oakland to block the port. The port brought in commodities to be sold, benefitting the rich and propping up the system. It seemed like common sense for revolutionaries in Egypt to take Tahrir Square, the center of activity, block main roads to stop people from shopping and working, and burn police stations. In fact, focusing on Tahrir Square misses all the blocked roads and burnt police stations across the rest of Egypt.

The reflex seems to be to block the flows of this world and construct new ones, to block one form of life and build many new forms.

Why do revolutions fail?

There is no good answer to this.

One reason revolt fails to materialize (among many) is that activity gets pacified by liberals. This, again, could be seen at Standing Rock, where those who were part of the “Spirit Camps” put their bodies between the police and the “Warrior Camps,” telling them to demobilize, abandon the conflict they had initiated, and pray. It can also be seen when liberals unmask covered protesters who are trying to push things further, or even pepper-spray them for nonviolently damaging property.

Along these lines, one of the most inspiring revolutions in the last 100 years was snuffed out by revolutionaries giving up their power, believing it was strategic. During the Spanish Civil War, workers in Barcelona, Aragon, and other urban and rural places took over the land and the factories, abolished government and money, and armed themselves. They then subordinated themselves to Republican government authority in the belief that doing so would help them win the fight against the fascists.

What this showed is that the Republican government was no more capable of fighting fascists than autonomous armed workers were. The workers should have trusted no one but themselves; instead they were repressed by both Republican and Communist henchmen until they fell in line. Both of these forces reintroduced market mechanisms and money, government authority, and other ways for the few to rule over the many. Contrary to their claims, these efforts did not make fighting the war any more efficient, and in some ways, especially the reintroduction of market forces into the food supply, they made things much worse. In the end, the fascists still won.

The problem here is viewing the conflict in purely military terms instead of as a social war. By falling in line with Republican government and military command, those in Barcelona and elsewhere allowed those authorities to organize social life and ultimately just laid the groundwork for a fascist organization of society. Their self-organization should never have been sacrificed.

When revolutionaries forget their struggle is more than a military confrontation, they become exactly what they are fighting against. They become their enemy. They also miss inspiring movements by fetishizing combat. We heard the left praise the fight of Kurdish women in Rojava against ISIS, and justifiably so, yet heard nothing about the grassroots councils that have sprung up and continue to survive all across Syria in spite of a horrible civil war. Where the Assad dictatorship’s control collapsed, these councils took on the role of providing electricity, distributing food and water, healing the sick and injured, and doing whatever else is necessary for life.

I have spoken mainly in generalities, attempting to explain my reasoning adequately without overcomplicating things or boring the reader.

Over 700 acres of the Wayne National Forest have been auctioned off to the oil and gas industry with hydrofracturing intentions. The Wayne is not new to gas and energy exploitation, yet this is a new and intensified maneuver in the war on Ohio’s only national forest. The plan from the Bureau of Land Management is to continue resource extraction until it’s all gone and The Wayne is dead. Some people will make a profit, though…

I will live in Wayne National Forest, in a long-term occupation starting on May 12th, in hopes of changing this tide. While it would be interesting for this to fit into some wider narrative of struggle, and in some ways it naturally does, that is not my main concern. My main concern is stopping the energy industry’s continued attack on the forest.

To anyone who has resonated with what’s been written, who sees this battle as their battle, and who believes they can help, PLEASE GET INVOLVED.

EVERYONE IS WELCOME TO COME.

To read:

– Affirming Gasland by the creators of the documentary Gasland

– 1984 by George Orwell

– The Madman: His Parables and Poems by Kahlil Gibran

– The Great Divorce by C.S. Lewis

– The Worst Mistake in the History of the Human Race by Jared Diamond

– What is Civilization? by John Haywood (found in The Penguin Historical Atlas of Ancient Civilizations)

– Debt by David Graeber

– To Our Friends by The Invisible Committee

To watch:

– Gasland

– Gasland 2

The Strange Persistence of Guilt

Those of us living in the developed countries of the West find ourselves in the tightening grip of a paradox, one whose shape and character have so far largely eluded our understanding. It is the strange persistence of guilt as a psychological force in modern life. If anything, the word persistence understates the matter. Guilt has not merely lingered. It has grown, even metastasized, into an ever more powerful and pervasive element in the life of the contemporary West, even as the rich language formerly used to define it has withered and faded from discourse, and the means of containing its effects, let alone obtaining relief from it, have become ever more elusive.

This paradox has set up a condition in which the phenomenon of rising guilt becomes both a byproduct of and an obstacle to civilizational advance. The stupendous achievements of the West in improving the material conditions of human life and extending the blessings of liberty and dignity to more and more people are in danger of being countervailed and even negated by a growing burden of guilt that poisons our social relations and hinders our efforts to live happy and harmonious lives.

I use the words strange persistence to suggest that the modern drama of guilt has not followed the script that was written for it. Prophets such as Friedrich Nietzsche were confident that once the modern Western world finally threw off the metaphysical straitjacket that had confined the possibilities of all previous generations, the moral reflexes that had accompanied that framework would disappear along with them. With God dead, all would indeed be permitted. Chief among the outmoded reflexes would be the experience of guilt, an obvious vestige of irrational fear promulgated by oppressive, life-denying institutions erected in the name and image of a punitive deity.

Indeed, Nietzsche had argued in On the Genealogy of Morality (1887), a locus classicus for the modern understanding of guilt, that the very idea of God, or of the gods, originated hand-in-hand with the feeling of indebtedness (the German Schuld—“guilt”—being the same as the word for “debt,” Schulden).1 The belief in God or gods arose in primitive societies, Nietzsche speculated, out of dread of the ancestors and a feeling of indebtedness to them. This feeling of indebtedness expanded its hold, in tandem with the expansion of the concept of God, to the point that when the Christian God offered itself as “the maximal god yet achieved,” it also brought about “the greatest feeling of indebtedness on earth.”

But “we have now started in the reverse direction,” Nietzsche exulted. With the “death” of God, meaning God’s general cultural unavailability, we should expect to see a consequent “decline in the consciousness of human debt.” With the cultural triumph of atheism at hand, such a victory could also “release humanity from this whole feeling of being indebted towards its beginnings, its prima causa.” Atheism would mean “a second innocence,” a regaining of Eden with neither God nor Satan there to interfere with and otherwise corrupt the proceedings.2

This is not quite what has happened; nor does there seem to be much likelihood that it will happen, in the near future. Nietzsche’s younger contemporary Sigmund Freud has proven to be the better prophet, having offered a dramatically different analysis that seems to have been more fully borne out. In his book Civilization and Its Discontents (Das Unbehagen in der Kultur), Freud declared the tenacious sense of guilt to be “the most important problem in the development of civilization.” Indeed, he observed, “the price we pay for our advance in civilization is a loss of happiness through the heightening of the sense of guilt.”3

Such guilt was hard to identify and hard to understand, though, since it so frequently dwelled on an unconscious level, and could easily be mistaken for something else. It often appears to us, Freud argued, “as a sort of malaise [Unbehagen], a dissatisfaction,”4 for which people seek other explanations, whether external or internal. Guilt is crafty, a trickster and chameleon, capable of disguising itself, hiding out, changing its size and appearance, even its location, all the while managing to persist and deepen.

This seems to me a very rich and incisive description, and a useful starting place for considering a subject almost entirely neglected by historians: the steadily intensifying (although not always visible) role played by guilt in determining the structure of our lives in the twentieth and twenty-first centuries. By connecting the phenomenon of rising guilt to the phenomenon of civilizational advance, Freud was pointing to an unsuspected but inevitable byproduct of progress itself, a problem that will only become more pronounced in the generations to come.

Demoralizing Guilt

Thanks in part to Freud’s influence, we live in a therapeutic age; nothing illustrates that fact more clearly than the striking ways in which the sources of guilt’s power and the nature of its would-be antidotes have changed for us. Freud sought to relieve in his patients the worst mental burdens and pathologies imposed by their oppressive and hyperactive consciences, which he renamed their superegos, while deliberately refraining from rendering any judgment as to whether the guilty feelings ordained by those punitive superegos had any moral justification. In other words, he sought to release the patient from guilt’s crushing hold by disarming and setting aside guilt’s moral significance, and re-designating it as just another psychological phenomenon, whose proper functioning could be ascertained by its effects on one’s more general well-being. He sought to “demoralize” guilt by treating it as a strictly subjective and emotional matter.

Health was the only remaining criterion for success or failure in therapy, and health was a functional category, not an ontological one. And the nonjudgmental therapeutic worldview whose seeds Freud planted has come into full flower in the mainstream sensibility of modern America, which in turn has profoundly affected the standing and meaning of the most venerable among our moral transactions, and not merely matters of guilt.

Take, for example, the various ways in which “forgiveness” is now understood. Forgiveness is one of the chief antidotes to the forensic stigma of guilt, and as such has long been one of the golden words of our culture, with particularly deep roots in the Christian tradition, in which the capacity for forgiveness is seen as a central attribute of the Deity itself. In the face of our shared human frailty, forgiveness expresses a kind of transcendent and unconditional regard for the humanity of the other, free of any admixture of interest or punitive anger or puffed-up self-righteousness. Yet forgiveness rightly understood can never deny the reality of justice. To forgive, whether one forgives trespasses or debts, means abandoning the just claims we have against others, in the name of the higher ground of love. Forgiveness affirms justice even in the act of suspending it. It is rare because it is so costly.

In the new therapeutic dispensation, however, forgiveness is all about the forgiver, and his or her power and well-being. We have come a long way from Shakespeare’s Portia, who spoke so memorably in The Merchant of Venice about the unstrained “quality of mercy,” which “droppeth as the gentle rain from heaven” and blesses both “him that gives and him that takes.”5 And an even longer way from Christ’s anguished cry from the cross, “Forgive them, for they know not what they do.”6 And perhaps even further yet from the most basic sense of forgiveness, the canceling of a monetary debt or the pardoning of a criminal offense, in either case a very conscious suspension of the entirely rightful demands of justice.

We still claim to think well of forgiveness, but it has in fact very nearly lost its moral weight by having been translated into an act of random kindness whose chief value lies in the sense of personal release it gives us. “Forgiveness,” proclaimed the journalist Gregg Easterbrook writing at Beliefnet, “is good for your health.”7 Like the similar acts of confession or apology, and other transactions in the moral economy of sin and guilt, forgiveness is in danger of being debased into a kind of cheap grace, a waiving of standards entirely, standards without which such transactions have little or no moral significance. Forgiveness only makes sense in the presence of a robust conception of justice. Without that, it is in real danger of being reduced to something passive and automatic and flimsy—a sanctimonious way of saying that nothing really matters very much at all.

The Infinite Extensibility of Guilt

The therapeutic view of guilt seems to offer the guilt-ridden an avenue of escape from its power, by redefining guilt as the result of psychic forces that do not relate to anything morally consequential. But that has not turned out to be an entirely workable solution, since it is not so easy to banish guilt merely by denying its reality. There is another powerful factor at work too, one that might be called the infinite extensibility of guilt. This proceeds from a very different set of assumptions, and is a surprising byproduct of modernity’s proudest achievement: its ceaselessly expanding capacity to comprehend and control the physical world.

In a world in which the web of relationships between causes and effects yields increasingly to human understanding and manipulation, and in which human agency therefore becomes ever more powerful and effective, the range of our potential moral responsibility, and therefore of our potential guilt, also steadily expands. We like to speak, romantically, of the interconnectedness of all things, failing to recognize that this same principle means that there is almost nothing for which we cannot be, in some way, held responsible. This is one inevitable side effect of the growing movement to change the name of our geological epoch from the Holocene to the Anthropocene—the first era in the life of the planet to be defined by the effects of the human presence and human power: effects such as nuclear fallout, plastic pollution, domesticated animals, and anthropogenic climate change. Power entails responsibility, and responsibility leads to guilt.

I can see pictures of a starving child in a remote corner of the world on my television, and know for a fact that I could travel to that faraway place and relieve that child’s immediate suffering, if I cared to. I don’t do it, but I know I could. Although if I did so, I would be a well-meaning fool like Dickens’s ludicrous Mrs. Jellyby, who grossly neglects her own family and neighborhood in favor of the distant philanthropy of African missions. Either way, some measure of guilt would seem to be my inescapable lot, as an empowered man living in an interconnected world.

Whatever donation I make to a charitable organization, it can never be as much as I could have given. I can never diminish my carbon footprint enough, or give to the poor enough, or support medical research enough, or otherwise do the things that would render me morally blameless.

Colonialism, slavery, structural poverty, water pollution, deforestation—there’s an endless list of items for which you and I can take the rap. To be found blameless is a pipe dream, for the demands on an active conscience are literally as endless as an active imagination’s ability to conjure them. And as those of us who teach young people often have occasion to observe, it may be precisely the most morally perceptive and earnest individuals who have the weakest common-sense defenses against such overwhelming assaults on their over-receptive sensibilities. They cannot see a logical place to stop. Indeed, when any one of us reflects on the brute fact of our being alive and taking up space on this planet, consuming resources that could have met some other, more worthy need, we may be led to feel guilt about the very fact of our existence.

The questions involved are genuine and profound; they deserve to be asked. Those who struggle most deeply with issues of environmental justice and stewardship are often led to wonder whether there can be any way of life that might allow one to escape being implicated in the cycles of exploitation and cruelty and privilege that mark, ineluctably, our relationship with our environment. They suffer from a hypertrophied sense of guilt, and desperately seek some path to an existence free of it.

In this, they embody a tendency of the West as a whole, expressed in an only slightly exaggerated form. So excessive is this propensity toward guilt, particularly in the most highly developed nations of the Western world, that the French writer Pascal Bruckner, in a courageous and brilliant recent study called The Tyranny of Guilt (in French, the title is the slightly different La tyrannie de la pénitence), has identified the problem as “Western masochism.” The lingering presence of “the old notion of original sin, the ancient poison of damnation,” Bruckner argues, holds even secular philosophers and sociologists captive to its logic.8

For all its brilliance, though, Bruckner’s analysis is not fully adequate. The problem goes deeper than a mere question of alleged cultural masochism arising out of vestigial moral reflexes. It is, after all, not merely our pathologies that dispose us in this direction. The pathologies themselves have an anterior source in the very things that make us proudest: our knowledge of the world, of its causes and effects, and our consequent power to shape and alter those causes and effects. The problem is perfectly expressed in T.S. Eliot’s famous question “After such knowledge, what forgiveness?”9 In a world of relentlessly proliferating knowledge, there is no easy way of deciding how much guilt is enough, and how much is too much.

Stolen Suffering

Notwithstanding all claims about our living in a post-Christian world devoid of censorious public morality, we in fact live in a world that carries around an enormous and growing burden of guilt, and yearns—sometimes even demands—to be free of it. About this, Bruckner could not have been more right. And that burden is always looking for an opportunity to discharge itself. Indeed, it is impossible to exaggerate how many of the deeds of individual men and women can be traced back to the powerful and inextinguishable need of human beings to feel morally justified, to feel themselves to be “right with the world.” One would be right to expect that such a powerful need, nearly as powerful as the merely physical ones, would continue to find ways to manifest itself, even if it had to do so in odd and perverse ways.

Which brings me to a very curious story, full of significance for these matters. It comes from a New York Times op-ed column by Daniel Mendelsohn, published on March 9, 2008, and aptly titled “Stolen Suffering.”10 Mendelsohn, a Bard College professor who had written a book about his family’s experience of the Holocaust, told of hearing the story of an orphaned Jewish girl who trekked 2,000 miles from Belgium to Ukraine, surviving the Warsaw ghetto, murdering a German officer, and taking refuge in forests where she was protected by kindly wolves. The story had been given wide circulation in a 1997 book, Misha: A Mémoire of the Holocaust Years, and its veracity was generally accepted. But it was eventually discovered to be a complete fabrication, created by a Belgian Roman Catholic named Monique De Wael.11

Such a deception, Mendelsohn argued, is not an isolated event. It needs to be understood in the context of a growing number of “phony memoirs,” such as the notorious child-survivor Holocaust memoir Fragments, or Love and Consequences, the putative autobiography of a young mixed-race woman raised by a black foster mother in gang-infested Los Angeles.12 These books were, as Mendelsohn said, “a plagiarism of other people’s trauma,” written not, as their authors claimed, “by members of oppressed classes (the Jews during World War II, the impoverished African-Americans of Los Angeles today), but by members of relatively safe or privileged classes.” Interestingly, too, he noted that the authors seemed to have an unusual degree of identification with their subjects—indeed, a degree of identification approaching the pathological. Defending Misha, De Wael declared, astonishingly, that “the story is mine…not actually reality, but my reality, my way of surviving.”13

What these authors have appropriated is suffering, and the identification they pursue is an identification not with certifiable heroes but with certifiable victims. It is a particular and peculiar kind of identity theft. How do we account for it? What motivates it? Why would comfortable and privileged people want to identify with victims? And why would their efforts appeal to a substantial reading public?

Or, to pose the question even more generally, in a way that I think goes straight to the heart of our dilemma: How can one account for the rise of the extraordinary prestige of victims, as a category, in the contemporary world?

I believe that the explanation can be traced back to the extraordinary weight of guilt in our time, the pervasive need to find innocence through moral absolution and somehow discharge one’s moral burden, and the fact that the conventional means of finding that absolution—or even of keeping the range of one’s responsibility for one’s sins within some kind of reasonable boundaries—are no longer generally available. Making a claim to the status of certified victim, or identifying with victims, however, offers itself as a substitute means by which the moral burden of sin can be shifted, and one’s innocence affirmed. Recognition of this substitution may operate with particular strength in certain individuals, such as De Wael and her fellow hoaxing memoirists. But the strangeness of the phenomenon suggests a larger shift of sensibility, which represents a change in the moral economy of sin. And almost none of it has occurred consciously. It is not something as simple as hypocrisy that we are seeing. Instead, it is a story of people working out their salvation in fear and trembling.

The Moral Economy of Sin

In the modern West, the moral economy of sin remains strongly tied to the Judeo-Christian tradition, and the fundamental truth about sin in the Judeo-Christian tradition is that sin must be paid for or its burden otherwise discharged. It can neither be dissolved by divine fiat nor repressed nor borne forever. In the Jewish moral world in which Christianity originated, and without which it would have been unthinkable, sin had always had to be paid for, generally by the sacrificial shedding of blood; its effects could never be ignored or willed away. Which is precisely why, in the Christian context, forgiveness of sin was specifically related to Jesus Christ’s atoning sacrifice, his vicarious payment for all human sins, procured through his death on the cross and made available freely to all who embraced him in faith. Forgiveness has a stratospherically high standing in the Christian faith. But it is grounded in fundamental theological and metaphysical beliefs about the person and work of Christ, which in turn can be traced back to Jewish notions of sin and how one pays for it. It makes little sense without them. Forgiveness, or expiation, or atonement—all of these concepts promising freedom from the weight of guilt are grounded in a moral transaction, enacted within the universe of a moral economy of sin.

But in a society that retains its Judeo-Christian moral reflexes but has abandoned the corresponding metaphysics, how can the moral economy of sin continue to operate properly, and its transactions be effectual? Can a credible substitute means of discharging the weight of sin be found? One workable way to be at peace with oneself and feel innocent and “right with the world” is to identify oneself as a certifiable victim—or better yet, to identify oneself with victims. This is why the Mendelsohn story is so important and so profoundly indicative, even if it deals with an extreme case. It points to the way in which identification with victims, and the appropriation of victim status, has become an irresistible moral attraction. It suggests the real possibility that claiming victim status is the sole sure means left of absolving oneself and securing one’s sense of fundamental moral innocence. It explains the extraordinary moral prestige of victimhood in modern America and Western society in general.

Why should that be so? The answer is simple. With moral responsibility comes inevitable moral guilt, for reasons already explained. So if one wishes to be accounted innocent, one must find a way to make the claim that one cannot be held morally responsible. This is precisely what the status of victimhood accomplishes. When one is a certifiable victim, one is released from moral responsibility, since a victim is someone who is, by definition, not responsible for his condition, but can point to another who is responsible.

But victimhood at its most potent promises not only release from responsibility, but an ability to displace that responsibility onto others. As a victim, one can project onto another person, the victimizer or oppressor, any feelings of guilt he might harbor, and in projecting that guilt lift it from his own shoulders. The result is an astonishing reversal, in which the designated victimizer plays the role of the scapegoat, upon whose head the sin comes to rest, and who pays the price for it. By contrast, in appropriating the status of victim, or identifying oneself with victims, the victimized can experience a profound sense of moral release, of recovered innocence. It is no wonder that this has become so common a gambit in our time, so effectively does it deal with the problem of guilt—at least individually, and in the short run, though at the price of social pathologies in the larger society that will likely prove unsustainable.

Grievance—and Penitence—on a Global Scale

All of this confusion and disruption to our most time-honored ways of handling the dispensing of guilt and absolution creates enormous problems, especially in our public life, as we assess questions of social justice and group inequities, which are almost impossible to address without such morally charged categories coming into play. Just look at the incredible spectacle of today’s college campuses, saturated as they are with ever-more-fractured identity politics, featuring an ever-expanding array of ever-more-minute grievances, with accompanying rounds of moral accusation and declarations of victimhood. These phenomena are not merely a fad, and they did not come out of nowhere.

Similar categories also come into play powerfully when the issues in question are ones relating to matters such as the historical guilt of nations and their culpability or innocence in the international sphere. Such questions are ubiquitous, as never before.

In the words of political scientist Thomas U. Berger, “We live in an age of apology and recrimination,” and he could not be more right.14 Guilt is everywhere around us, and its potential sources have only just begun to be plumbed, as our understanding of the buried past widens and deepens.

Gone is the amoral Hobbesian notion that war between nations is merely an expression of the state of nature. The assignment of responsibility for causing a war, the designation of war guilt, the assessment of punishments and reparations, the identification and prosecution of war crimes, the compensation of victims, and so on—all of these are thought to be an essential part of settling a war’s effects justly, and are part and parcel of the moral economy of guilt as it now operates on the national and international levels.

The heightened moral awareness we now bring to international affairs is something new in human history, stemming from the growing social and political pluralism of Western democracies and the unprecedented influence of universalized norms of human rights and justice, supported and buttressed by a robust array of international institutions and nongovernmental organizations ranging from the International Criminal Court to Amnesty International.

In addition, the larger narratives through which a nation organizes and relates its history, and through which it constitutes its collective memory, are increasingly subject to monitoring and careful scrutiny by its constituent ethnic, linguistic, cultural, and other subgroups, and are responsive to demands that those histories reflect the nation’s past misdeeds and express contrition for them. Never has there been a keener and more widespread sense of particularized grievances at work throughout the world, and never have such grievances been able to count on receiving such a thorough and generally sympathetic hearing from scholars and the general public.

Indeed, it is not an exaggeration to say that one could not begin to understand the workings of world politics today without taking into account a whole range of morally charged questions of guilt and innocence. How can one fully understand the decision by Chancellor Angela Merkel to admit a million foreign migrants a year into Germany without first understanding how powerfully the burden of historical guilt weighs upon her and many other Germans? Such factors are now as much a part of historical causation and explanation as such standbys as climate, geography, access to natural resources, demographics, and socioeconomic organization.

There is no disputing the fact, then, that history itself, particularly in the form of “coming to terms with” the wrongs of the past and the search for historical justice, is becoming an ever more salient element in national and international politics. We see it in the concern over past abuses of indigenous peoples, colonized peoples, subordinated races and classes, and the like, and we see it in the ways that nations relate their stories of war. Far from being buried, the past has become ever more alive with moral contestation.

Perhaps the most impressive example of sustained collective penitence in human history has come from the government and people of Germany, who have done so much to atone for the sins of Nazism. But how much penitence is enough? And how long must penance be done? When can we say that the German people—who are, after all, an almost entirely different cast of characters from those who lived under the Nazis—are free and clear, and have “paid their debt” to the world and to the past, and are no longer under a cloud of suspicion? Who could possibly make that judgment? And will there come a day—indeed, has it already arrived, with the nation’s backlash against Chancellor Merkel’s immigration blunders?—when the Germans have had enough of the Sisyphean guilt which, as it may seem to them, they have been forced by other sinful nations to bear, and begin to seek their redemption by other means?

Who, after all, has ever been pure and wise enough to administer such postwar justice with impartiality and detachment, and impeccable moral credibility? What nation or entity at the close of World War II was sufficiently without sin to cast the decisive stone? The Nuremberg and Tokyo war crimes trials were landmarks in the establishment of institutional entities administering and enforcing international law. But they also were of questionable legality, reflecting the imposition of ad hoc, ex post facto laws, administered by victors whose own hands were far from entirely clean (consider the irony of Soviet judges sitting in judgment of the same kinds of crimes their own regime committed with impunity)—indeed, victors who might well have been made to stand trial themselves, had the tables been turned, and the subject at hand been the bombing of civilian targets in Hiroshima and Dresden.

Or consider whether the infamous Article 231 in the Treaty of Versailles, assigning “guilt” to Germany for the First World War, was not, in the very attempt to impose the victor’s just punishment on a defeated foe, itself an act of grave injustice, the indignity of which surely helped to precipitate the catastrophes that followed it. The assignment of guilt, especially exclusive guilt, to one party or another may satisfy the most urgent claims of justice, or the desire for retribution, but may fail utterly the needs of reconciliation and reconstruction. As Elazar Barkan bluntly argued in his book The Guilt of Nations, “In forcing an admission of war guilt at Versailles, rather than healing, the victors instigated resentment that contributed to the rise of Fascism.”15 The work of healing, like the work of the Red Cross, has a claim all its own, one that is not always compatible with the utmost pursuit of justice (although it probably cannot succeed in the complete absence of such a pursuit). Nor does such an effort to isolate and assign exclusive guilt meet the needs of a more capacious historical understanding, one that understands, as Herbert Butterfield once wrote, that history is “a clash of wills out of which there emerges something that no man ever willed.”16 And, he might have added, in which no party is entirely innocent.

So once again we find ourselves confronting the paradox of sin that cannot be adequately expiated. The deeply inscribed algorithm of sin demands some kind of atonement, but for some aspects of the past there is no imaginable way of making that transaction without creating new sins of equivalent or greater dimension. What possible atonement can there be for, say, the institution of slavery? It is no wonder that the issue of reparations for slavery surfaces periodically, and probably always will, yet it is simply beyond the power of the present or the future to atone for the sins of the past in any effective way. Those of us who teach history, and take seriously the moral formation of our students, have to consider what the takeaway from this is likely to be. Do we really want to rest easy with the idea that a proper moral education needs to involve a knowledge of our extensive individual and collective guilt—a guilt for which there is no imaginable atonement? That this is not a satisfactory state of affairs would seem obvious; what to do about it, particularly in a strictly secular context, is another matter.

Again, the question arises whether and to what extent all of this has something to do with our living in a world that has increasingly, for the past century or so, been run according to secular premises, using a secular vocabulary operating within an “immanent frame”—a mode of operation that requires us to be silent about, and forcibly repress, the very religious frameworks and vocabularies within which the dynamics of sin and guilt and atonement have hitherto been rendered intelligible. I use the term “repress” here with some irony, given its Freudian provenance. But even the irreligious Freud did not envision the “liberation” of the human race from its religious illusions as an automatic and sufficient solution to its problems. He saw nothing resembling a solution. Indeed, it could well be the case, and paradoxically so, that just at the moment when we have become more keenly aware than ever of the wages of sin in the world, and more keenly anxious to address those sins, we find ourselves least able to describe them in those now-forbidden terms, let alone find moral release from their weight. Andrew Delbanco puts it quite well in his perceptive and insightful 1995 book The Death of Satan:

We live in the most brutal century in human history, but instead of stepping forward to take the credit, the devil has rendered himself invisible. The very notion of evil seems to be incompatible with modern life, from which the ideas of transgression and the accountable self are fast receding. Yet despite the loss of old words and moral concepts—Satan, sin, evil—we cannot do without some conceptual means for thinking about the universal human experience of cruelty and pain…. If evil, with all its insidious complexity, escapes the reach of our imagination, it will have established dominion over us all.17

So there are always going to be consequences attendant upon the disappearance of such words, and they may be hard to foresee, and hard to address. “Whatever became of sin?” asked the psychiatrist Karl Menninger, in his 1973 book of that title. What, in the new arrangements, can accomplish the moral and transactional work that was formerly done by the now-discarded concepts? If, thanks to Nietzsche, the absence of belief in God is “the notional condition of modern Western culture,” as Paula Fredriksen argues in her study of the history of the concept of sin, doesn’t that mean that the idea of sin is finished too?18

Yes, it would seem to mean just that. After all, “sin” cannot be understood apart from a larger context of ideas. So what happens when all the ideas that upheld “sin” in its earlier sense have ceased to be normatively embraced? Could not the answer to Menninger’s question be something like Zarathustra’s famous cry: “Sin is dead and we have killed it!”?

Sin is a transgression against God, and without a God, how can there be such a thing as sin? So the theory would seem to dictate. But as Fredriksen argues, that theory fails miserably to explain the world we actually inhabit. Sin lives on, it seems, even if we decline to name it as such. We live, she says, in the web of culture, and “the biblical god…seems to have taken up permanent residence in Western imagination…[so much so that] even nonbelievers seem to know exactly who or what it is that they do not believe in.”19 In fact, given the anger that so many nonbelievers evince toward this nonexistent god, one might be tempted to speculate whether their unconscious cry is “Lord, I do not believe; please strengthen my belief in your nonexistence!” Such was Nietzsche’s genius in communicating how difficult an achievement a clean and unconditional atheism is, a conundrum that he captured not by asserting that God does not exist, but that God is dead. For the existence of the dead constitutes, for us, a presence as well as an absence. It is not so easy to wish that enduring presence away, particularly when there is the lingering sense that the presence was once something living and breathing.

What makes the situation dangerous for us, as Fredriksen observes, is not only the fact that we have lost the ability to make conscious use of the concept of sin but that we have also lost any semblance of a “coherent idea of redemption,”20 the idea that has always been required to accompany the concept of sin in the past and tame its harsh and punitive potential. The presence of vast amounts of unacknowledged sin in a culture, a culture full to the brim with its own hubristic sense of world-conquering power and agency but lacking any effectual means of achieving redemption for all the unacknowledged sin that accompanies such power: This is surely a moral crisis in the making—a kind of moral-transactional analogue to the debt crisis that threatens the world’s fiscal and monetary health. The rituals of scapegoating, of public humiliation and shaming, of multiplying morally impermissible utterances and sentiments and punishing them with disproportionate severity, are visibly on the increase in our public life. They are not merely signs of intolerance or incivility, but of a deeper moral disorder, an Unbehagen that cannot be willed away by the psychoanalytic trick of pretending that it does not exist.

The Persistence of Guilt

Where then does this analysis of our broken moral economy leave us? The progress of our scientific and technological knowledge in the West, and of the culture of mastery that has come along with it, has worked to displace the cultural centrality of Christianity and Judaism, the great historical religions of the West. But it has not been able to replace them. For all its achievements, modern science has left us with at least two overwhelmingly important, and seemingly insoluble, problems for the conduct of human life. First, modern science cannot instruct us in how to live, since it cannot provide us with the ordering ends according to which our human strivings should be oriented. In a word, it cannot tell us what we should live for, let alone what we should be willing to sacrifice for, or die for.

And second, science cannot do anything to relieve the guilt weighing down our souls, a weight to which it has added appreciably, precisely by rendering us able to be in control of, and therefore accountable for, more and more elements in our lives—responsibility being the fertile seedbed of guilt. That growing weight seeks opportunities for release, seeks transactional outlets, but finds no obvious or straightforward ones in the secular dispensation. Instead, more often than not we are left to flail about, seeking some semblance of absolution in an incoherent post-Christian moral economy that has not entirely abandoned the concept of sin but lacks the transactional power of absolution or expiation without which no moral system can be bearable.

What is to be done? One conclusion seems unavoidable. Those who have viewed the obliteration of religion, and particularly of Judeo-Christian metaphysics, as the modern age’s signal act of human liberation need to reconsider their dogmatic assurance on that point. Indeed, the persistent problem of guilt may open up an entirely different basis for reconsidering the enduring claims of religion. Perhaps human progress cannot be sustained without religion, or something like it, and specifically without something very like the moral economy of sin and absolution that has hitherto been secured by the religious traditions of the West.

Such an argument would have little to do with conventional theological apologetics. Instead, it would draw from empirical realities regarding the social and psychological makeup of advanced Western societies. And it would fully face the fact that, without the support of religious beliefs and institutions, one may have no choice but to accept the dismal prospect envisioned by Freud, in which the advance of human civilization brings not happiness but a mounting tide of unassuaged guilt, ever in search of novel and ineffective, and ultimately bizarre, ways to discharge itself. Such an advance would steadily diminish the human prospect, and render it less and less sustainable. It would smother the energies of innovation that have made the West what it is, and fatally undermine the spirited confidence needed to uphold the very possibility of progress itself. It must therefore be countered. But to be countered, it must first be understood.

Endnotes

  1. The discussion that follows is drawn from the second essay in Friedrich Nietzsche, On the Genealogy of Morality, ed. Keith Ansell-Pearson, trans. Carol Diethe (Cambridge, England: Cambridge University Press, 2006), 35–67. First published 1887. I here take note of the fact that any discussion of guilt per se runs the risk of conflating different meanings of the word: guilt as a forensic or objective term, guilt as culpability, is not the same thing as guilt as a subjective or emotional term. It is the difference between being guilty and feeling guilty, a difference that is analytically clear, but often difficult to sustain in discussions of particular instances.
  2. Ibid., 61–62.
  3. Sigmund Freud, Civilization and Its Discontents, trans. James Strachey (New York, NY: Norton, 2005), 137, 140. First published 1930.
  4. Ibid., 140.
  5. William Shakespeare, The Merchant of Venice, Act 4, Scene 1, lines 184–205; see, e.g., Stanley Wells and Gary Taylor, eds., The Oxford Shakespeare: The Complete Works, second edition (Oxford, England: Oxford University Press, 2005), 473.
  6. Luke 23:34 (Revised Standard Version).
  7. Gregg Easterbrook, “Forgiveness is Good for Your Health,” Beliefnet, n.d., http://www.beliefnet.com/wellness/health/2002/03/forgiveness-is-good-for-your-health.aspx. Accessed 5 January 2017.
  8. Pascal Bruckner, The Tyranny of Guilt: An Essay on Western Masochism, trans. Steven Rendall (Princeton, NJ: Princeton University Press, 2010), 1–4.
  9. T.S. Eliot, “Gerontion,” line 34, in The Complete Poems and Plays: 1909–1950 (Orlando, FL: Harcourt Brace Jovanovich, 1971), 22. The poem was first published in 1920.
  10. Daniel Mendelsohn, “Stolen Suffering,” New York Times, March 9, 2008, WK12, http://www.nytimes.com/2008/03/09/opinion/09mendelsohn.html?_r=0.
  11. The book was Misha: A Mémoire of the Holocaust Years (Boston, MA: Mount Ivy Press, 1997), and the author published it under the name Misha Defonseca. According to the Belgian newspaper Le Soir, De Wael was the daughter of parents who had collaborated with the Nazis: see David Mehegan, “Misha and the Wolves,” Off the Shelf (blog), Boston Globe, March 3, 2008, http://www.boston.com/ae/books/blog/2008/03/misha_and_the_w.html.
  12. Binjamin Wilkomirski, Fragments: Memories of a Wartime Childhood (New York, NY: Schocken, 1997); Margaret B. Jones, Love and Consequences: A Memoir of Hope and Survival (New York, NY: Riverhead, 2008).
  13. In a final twist of the case, in May 2014 the Massachusetts Court of Appeals ruled that De Wael had to forfeit the $22.5 million in royalties she had received for Misha. Quotation from Lizzie Dearden, “Misha Defonseca: Author Who Made Up Holocaust Memoir Ordered to Repay £13.3m,” The Independent, May 12, 2014, http://www.independent.co.uk/arts-entertainment/books/news/author-who-made-up-bestselling-holocaust-memoir-ordered-to-repay-133m-9353897.html; additional details from Jeff D. Gorman, “Bizarre Holocaust Lies Support Publisher’s Win,” Courthouse News Service, May 8, 2014, http://www.courthousenews.com/2014/05/08/67710.htm.
  14. Thomas U. Berger, War, Guilt, and World Politics after World War II (New York, NY: Cambridge University Press, 2012), 8.
  15. Elazar Barkan, The Guilt of Nations: Restitution and Negotiating Historical Injustices (Baltimore, MD: Johns Hopkins University Press, 2000), xxxiii.
  16. Herbert Butterfield, The Whig Interpretation of History (New York, NY: Norton, 1965), 45–47.
  17. Andrew Delbanco, The Death of Satan: How Americans Have Lost the Sense of Evil (New York, NY: Farrar, Straus and Giroux, 1995), 9.
  18. Paula Fredriksen, Sin: The Early History of an Idea (Princeton, NJ: Princeton University Press, 2012), 149.
  19. Ibid.
  20. Ibid., 150.

Wilfred M. McClay is G.T. and Libby Blankenship Chair in the History of Liberty and director of the Center for the History of Liberty at the University of Oklahoma.

Reprinted from The Hedgehog Review 19.1 (Spring 2017). This essay may not be resold, reprinted, or redistributed for compensation of any kind without prior written permission. Please contact The Hedgehog Review for further details.

In Search Of Ourselves

From yoga retreats to mindfulness meditation, finding ourselves is in vogue. Yet what we expect to find remains a mystery, and who finds whom is itself a puzzle. Might there be no one there to find? Is the process of self-discovery a desirable and vital goal? Or just advertising spin?

The Panel

Philosopher and author of The Singularity David Chalmers joins explorer Ed Stafford and award-winning novelist Joanna Kavenna to discover what lies within.

https://iai.tv/video/in-search-of-ourselves

Speakers

  • David Chalmers
    Formulator of the hard problem of consciousness, philosopher of mind David Chalmers is the author of The Singularity.
  • Joanna Kavenna
    Winner of the Orange First Novel prize, Kavenna’s works include A Field Guide to Reality, The Ice Museum and Inglorious. Her journalism has appeared in the Lond…
  • David Malone
    David Malone is a director and presenter of BBC and Channel 4 documentaries exploring the history and philosophy of science. His work includes Testing God and h…
  • Ed Stafford
    Ed Stafford is an English explorer and Guinness World Record holder for being the first person to walk the length of the Amazon River.

William Powell, author of counterculture manifesto ‘The Anarchist Cookbook’, dies at 66

An enraged 19-year-old, William Powell holed up in the bowels of the New York Public Library and pored over every shred of mayhem he could find — declassified military documents, Army field guides, electronics catalogs, insurrectionist pamphlets, survivalist guidebooks.

The material formed the bedrock for “The Anarchist Cookbook,” a crude though clever how-to book for aspiring terrorists, troublemakers and would-be revolutionaries.

Published as the Vietnam War continued to boil and the Summer of Love faded in the distance, the book became a bestseller and an instant manifesto of dissent in America, as ubiquitous in a college dorm room as a Che Guevara poster or a copy of the “Whole Earth Catalog.”

But as the decades passed, Powell came to see the book as a misstep, a vast error in judgment.

Confronted late in life by the makers of the documentary “American Anarchist,” Powell seemed to buckle at the thought that his book had been tied to Columbine, the Oklahoma City bombing, and a litany of other atrocities.

But if there was blood on his hands, he didn’t fully acknowledge it.

“I don’t know the influence the book may have had on the thinking of the perpetrators of these attacks, but I cannot imagine it was positive.”

Long an expatriate, Powell died of a heart attack July 11 during a vacation with his wife, children and grandchildren in Halifax, Canada. His death became public only when it was noted in the closing credits of “American Anarchist,” which premiered Friday, and was also disclosed on a Facebook page devoted to Powell’s work as a special education teacher in Africa and Asia. He was 66.

“The Anarchist Cookbook,” which has sold at least 2 million copies — printed, downloaded or otherwise — and remains in publication, was originally a 160-page book that offered a nuts-and-bolts overview of weaponry, sabotage, explosives, booby traps, lethal poisons and drug making. Illustrated with crude drawings, it informed readers how to make TNT and Molotov cocktails, convert shotguns to rocket launchers, destroy bridges, behead someone with piano wire and brew LSD.

The book came with a warning: “Not for children or morons.”

In a foreword, Powell advised that he hadn’t written the book for fringe militant groups of the era like the Weathermen or Minutemen, but for the “silent majority” in America, those he said needed to learn the tools for survival in an uncertain time. Powell himself was worried about being drafted and was an outspoken critic of the Vietnam War and President Nixon.

“This book is for anarchists — those who feel able to discipline themselves — on all subjects (from drugs to weapons to explosives) that are currently illegal or suppressed in this country,” he wrote.

Critics brushed the book off as both “reckless” and “pointless”; the FBI took note but decided any intervention would only stoke further interest in the book. Activists associated with militant groups branded it a transparent attempt to profit off the discord in America.

Powell said he received death threats and retreated to Vermont. He held only one press conference after the book was published, and it was cut short when someone hurled stink bombs toward the author.

In more recent editions, the book appears to have grown shorter, and readers on Amazon have complained that it has been heavily edited. One reader said he was gravely disappointed to find that a recipe for napalm had been cut from the book.

Powell eventually found a more conventional life, returning to college, earning a master’s degree in English, becoming a teacher, getting married and raising a family. He also led a nomadic life, teaching special needs children as he roamed the world with his wife and children, traveling from China to Tanzania.

The book itself never made him rich. He conceded years later that the copyright had been held from the start by the book’s original publisher, Lyle Stuart Inc., and that at best he had made $50,000 off the book.

Powell said he became a Christian and found himself increasingly uncomfortable with the book, which had tailed him like a shadow, sometimes standing in the way of a job or testing a friendship. In the late 1970s, he asked the publisher to take “The Anarchist Cookbook” out of publication. His request was rejected.

The author did, though, add a cautionary note to would-be buyers on Amazon, condemning his own book as “a misguided product of my adolescent anger.” He said the book should no longer be in print. He stopped short of urging people not to buy it, though his feelings were clear.

“The central idea to the book was that violence is an acceptable means to bring about political change,” he wrote. “I no longer agree with this.”

In 2013 he wrote a first-person story for The Guardian, again expressing remorse for the book and noting that he had more than atoned for it with decades of teaching and public service in the poorest and least developed countries in the world. He concluded that as a teen, he had accepted the notion that violence could be used to prevent violence.

“I had fallen for the same irrational pattern of thought that led to U.S. military involvement in both Vietnam and Iraq,” he wrote. “The irony is not lost on me.”

On a Facebook remembrance page, filled with condolences and fond memories from students, fellow teachers and family members, there is no obvious mention of the book that made him noteworthy. There is, though, evidence Powell had carved out a far different reputation in the classroom.

“If I have made any difference or had any impact on students’ lives since I began teaching overseas, it is because Bill was the catalyst,” wrote Kenny Peavy. “He was the first one to take the time to truly see.”
