
Confessions of a Recovering Environmentalist review – why the human race is heading for the fire

Paul Kingsnorth, a former green activist, thinks the environmental movement has gone wrong. He argues for ‘uncivilisation’

The future for humanity and many other life forms is grim. The crisis gathers force. Melting ice caps, rising seas, vanishing topsoil, felled rainforests, dwindling animal and plant species, a human population forever growing and gobbling and using everything up. What’s to be done? Paul Kingsnorth thinks nothing very much. We have to suck it up. He writes in a typical sentence: “This is bigger than anything there has ever been for as long as humans have existed, and we have done it, and now we are going to have to live through it, if we can.”

Hope finds very little room in this enjoyable, sometimes annoying and mystical collection of essays. Kingsnorth despises the word’s false promise; it comforts us with a lie, when the truth is that we have created an “all-consuming global industrial system” which is “effectively unstoppable; it will run on until it runs out”. To imagine otherwise – to believe that our actions can make the future less dire, even ever so slightly – means that we probably belong to the group of “highly politicised people, whose values and self-image are predicated on being activists”.

According to Kingsnorth, such people find it hard to be honest with themselves. He was once one of them.

We might tell ourselves that The People are ignorant of The Facts and that if we enlighten them they will Act. We might believe that the right treaty has yet to be signed, or the right technology yet to be found, or that the problem is not too much growth and science and progress but too little of it. Or we might choose to believe that a Movement is needed to expose the lies being told to The People by the Bad Men in Power who are preventing The People from doing the rising up they will all want to do when they learn The Truth.

He says this is where “the greens are today”. Environmentalism has become “a consolation prize for a gaggle of washed-up Trots”.

As a characterisation of the green movement, this outbreak of adolescent satire seems unfair. To suggest that its followers become activists only because their “values and self-image” depend on it implies that there is no terror in their hearts, no love of the natural world, nothing real other than their need for a hobby. My experience of green politics is minuscule and secondhand compared with the author’s; all I can say is that the environmentalists I know often share his doubts and yet manage to stick with the cause, believing that their actions may not be totally ineffectual, that something is better than nothing. Most of us would tip our hat to that idea, but Kingsnorth is a passionate apostate with an almost Calvinist certainty that most of the human race, if not all of it, is heading for the fire.

These pieces trace some of his personal and political history. He had a middle-class childhood in the outer London suburbs, with a father who was a “compulsive long-distance walker” – he took his son on marches across the English and Welsh hills. In 1992, aged 19, Kingsnorth joined the protests on Twyford Down against the hill’s destruction by the M3. Aged 21, he was in the rainforests of Indonesia. Like many others, he became an environmentalist “because of a strong emotional reaction to wild places, and the world beyond the human” – like them, he wanted “to save nature from people”. But he also wanted to be different and famous. When he first took it up, green activism “seemed rebellious and excitingly outsiderish”; later, he writes with disappointment, it became “almost de rigueur among the British bourgeoisie”.

Disenchantment arrived when he was in his 30s. In a piece published in 2011, after he has written two or three books as well as columns “for the smart newspapers and the clever magazines”, he decides that his new role model is “not Hemingway but Salinger”. He has done the “big book stuff” – the tours, the extracts run big across the centre pages of mass-market papers. There will be no more Newsnight interviews, no more sitting on the sofa with Richard and Judy (“Jerry Springer was sitting next to me. It was … strange”). All he wants is an acre or two, a house, some bean rows, a pasture, a view of the river. In lists of this kind, renunciation can be hard to distinguish from bragging, and self-sufficiency comes packaged with literary romance.

At the root of this disillusion and retreat – he lives now in a dry-lavatory bungalow in Galway – lay what he calls the “single-minded obsession with climate change” that began to grip environmentalism early in the century. “The fear of carbon has trumped all other issues,” he writes. “Everything else has been stripped away.” Some would see this as saving the planet. Kingsnorth thinks the opposite, that we are destroying the wildest parts of it in the name of sustainability, “a curious, plastic word” that means “sustaining human civilisation at the comfort level that the world’s rich people – us – feel is their right, without destroying the ‘natural capital’ or the ‘resource base’ that is needed to do so”. In more concrete terms, it means wind farms, solar panels and undersea turbines, the renewables that will allow us to carry on business as usual.

Kingsnorth notes that environmentalism is now respectable enough to be embraced by the presidents of both the US (pre-Trump) and Anglo-Dutch Shell, and that a lot of awkward questions have been pushed aside by the drive to reduce carbon. The number of humans, for example: when the goal is sustaining a global population of 10 billion, population growth suddenly isn’t a problem, and anyone who suggests otherwise is “giving succour to fascism or racism or gender discrimination”. Instead we make the hills, the deserts and the seas suffer – we’re “industrialising [the] wild places in the name of human desire”.

He writes insightfully about England – presciently, too. “Large-scale immigration is not, as some of its more foaming opponents believe, a conspiracy by metropolitan liberals to destroy English identity,” he says in an essay first published by the Guardian in 2015. “It is a simple commercial calculation. It may cause overcrowding and cultural tension … it is undoubtedly good for growth … if you don’t want the population movement, you don’t get the cheap, easy consumer lifestyle it facilitates. Which will you choose?”

This is Kingsnorth at his plainest and most provocative, but another Kingsnorth is never far away, as romantic in his nationalism as any Victorian storybook when he writes in the same essay: “England is the still pool under the willows where nobody will find you all day, and the only sound is the fish jumping in the dappled light.” This Kingsnorth believes that the human race will eventually die of civilisation, and he wants to create what he calls “Uncivilisation” that will show us a new way to look at human history and endeavour. Stories, he says, are the key.

The book ends with a manifesto: The Eight Principles of Uncivilisation, designed to undermine the myths of progress and human centrality. “Principle 7: we will not lose ourselves in the elaboration of theories or ideologies. Our words will be elemental. We write with dirt under our fingernails.” And so, rather than electric cars and oil in the ground, we are left with a smaller idea of salvation: a little literary movement of the kind that might have gathered around a hand press in a Sussex village c1925, facing the real uncivilisation that has still to come.

Confessions of a Recovering Environmentalist is published by Faber.

In Love with the Monstrous: Why the Hell Am I Attracted to Such Horrible Things?

My otherwise-quite-suitable life-partner and I seem to address this topic quite often, still, even after more than twenty-three years together: Why the hell do I like such horrible things?  (In case you’re wondering, this oft-visited discussion is rarely instigated by me.) I’m not just talking about music, though of course music is one of the many areas where this ‘Mat likes horrible things’ rule is undeniably true.  I’m talking art in general, whether it’s sound or visual art; I’m talking movies, whether it’s disturbing cinema or silly monster movies or films causing severe psychological discomfort; but I’m also talking about actively researching/hunting down and reading about the various assorted true depravities committed by the ever-creative-in-this-department mass of humankind.  Horribleness.  Miscellaneous vileness.  Ugliness of the form and spirit.  I seem drawn to it, and always have been, ever since I can remember.  And, given the extremity of topic/sound/aesthetic surrounding this article, the odds are strong that you too, Heathen Harvester, are just as drawn to the deplorable as I am.  The question I want to investigate here is: why?

Because it’s not all of us that dig this shit.  There are a great many people (as frequently brought up as some kind of evidence by my aforementioned otherwise-quite-suitable life-partner) who don’t like ugliness/horror/depravity at all, and in fact spend a good deal of time deliberately avoiding such matters, choosing to spend their finite hours on this planet enjoying things that are, well, enjoyable.  Instead of, say, looking up uncensored footage of prison stabbings, they’ll read an article on, I don’t know, propagating kale, or look at pictures of animals with amusing expressions, or, I don’t know, something else.  I honestly have no idea.  Because I’m too busy watching grainy footage of people shivving each other in the weights yard.

And I’ve always been this way.  The earliest memory I have of being drawn to the monstrous was as a very young child, watching Doctor Who (I’ve since rummaged through my old stuff and have found a tiny notepad my mum used to keep, which is full of her painstaking re-drawings of my drawings when I was little, and have found a picture of Doctor Who and his companion Sarah Jane which mum dated sixth of August 1978, meaning I was about three and a half).  I seem to recall some green slimy eyeball-type creature shambling up the side of a lighthouse, and I remember loving it soooo much.  (I clearly also remember mum telling dad that my love of the bizarre and frightening was ‘just a phase’, which is pretty damn funny in hindsight.)  But why did I love it so much?  Was it just a love of the impossible, the fantastical?  Or was it just some very normal thing that I never grew out of (I mean, all kids love monsters, don’t they?)—in which case, why didn’t I grow out of it?

Some of us seem drawn to ugly art, strange music, and real-life depravity, and some of us don’t.  I have an inkling that the two are related (being drawn to ugly strangeness in sound/vision, and being interested in ugly strangeness in real life), but of course nothing is ever actually that simple, and I definitely know people who refuse to watch scary/freaky movies but insist on weird/noisy music at all times, so I’m pretty sure whatever conclusions I come up with will be highly variable in their personal mileage, and the whole lumping-this-all-together thing I’m attempting here may very well be a terrible mistake.  But, well, I’m going to attempt it anyway.

So, first stop… monstrousness in fantasy/art/sound/imagination.

LEVEL ONE: THE MONSTROUS AS AESTHETIC

My otherwise-quite-suitable life-partner has a simple rule: no screaming in the lounge room (at least, not with her around).  This doesn’t refer to my own screaming (I am a very quiet chap in general, softly-spoken with a tendency to mumble incoherently), but rather the screaming of the vocalists in the musical projects that I choose to listen to: Nocturno Culto, Utarm, Katherine Katz, Mories, Nekrasov, Jay Randall, Passenger of Shit, J R Hayes, etc. (not to mention the even more ‘non-vocalist’-type screams of bands like Abruptum and Stalaggh/Gulaggh).  She can tolerate the more tuneful-type screams of Devin Townsend, but that’s about her limit: otherwise, any part of our house that isn’t my dimly lit (and expertly soundproofed) underhouse studio is simply a no-scream zone.  Which is fair enough.  After all, human screams are one of the sounds we’re almost biologically attuned to dislike, either through empathy or revulsion.  So why does so much of the music I like contain so damn much of the stuff?   The same goes for immense amounts of atonality, or for overwhelming cut-up chaos (without repetition or pattern or structure): These things are, as a rule, disorienting and/or anxiety-inducing, so why the fuck do I chase it so much?  Why does something in me light up when it gets sonically flummoxed, when the same thing drives other (normal) people away?  And why are you like that too?  What happened to us?  Are we damaged?

I suspect this is roughly how my otherwise-quite-suitable life-partner sees it: that I turned out “wrong” somehow, that I’m a bit “broken”.  But I also suspect that this view is completely inaccurate.  Because I just don’t feel very broken.  I feel fine, generally speaking.  It’s not like I’m drawn to this chaos or darkness or phantasmagorical pain because it’s only then that I feel at home, or because it’s only horror that makes me feel like someone understands my hellish existence, or that it’s the only way I can experience healing catharsis, or anything like that.  It’s not like I need horrible screaming people in my lounge room.  It honestly feels like it’s purely a taste thing, an aesthetic that I’m drawn to.  I just like horrible screaming people, ugly visions, inappropriate textures, and sordidness of spirit.  I just do.  But, of course, this is exactly the issue I’m attempting to investigate here:  the reasons behind this taste, and the reasons why I’m drawn to this particular aesthetic, given that the whole human experience is typically about avoiding the same (shunning the ugly, moving away from the screaming person, not submerging oneself in grossness, etc.).

To help with writing this essay, I’ve just gone and read up on what draws people to horror movies (as an example of the ‘monstrous in art’), and it turns out there’s a million different theories:

1) There’s the theory that watching a scary/freaky movie makes one’s heart rate, blood pressure, and breathing intensify, which kind of experientially heightens the feelings associated with watching it, so, if you’re having a great time watching, it’ll feel like an even greater time, relatively speaking, because your system is on such high alert.  This theory also lends itself to musical experiences:  If all that atonality and screaming and super-speedy beatmongering (or super-loud doom-vibes) cause your biological system to become heightened, and you’re having a great time listening, then it’ll become an even greater time, relatively speaking.  This theory also ties into the idea of some people totally digging it, and some people totally not digging it, because this theory also says that if you’re having a bad time watching/listening, the bad experience will be made even more unpleasant by the same heightened biological states.  But what this theory doesn’t really help with is why the monstrous thing is enjoyable in the first place.  It only really deals with how the heightened experience makes the reaction stronger than it would be for other forms of media.

Still from Taxidermia

2) There’s a theory that it is the heightened excitation itself that we enjoy, in the same way as a base-jumper enjoys leaping off things or a rollercoaster-fan enjoys screaming in abject terror and barfing their guts up (assuming that’s what people enjoy about rollercoasters).  We get off on the feeling of it.  And, even better than a rollercoaster, watching a scary movie, or listening to a disorienting album is an intrinsically safe way to go about getting this hit of heightened excitation.  There may be some merit to this theory; there is definitely some kind of a buzz that I get from these forms of media, and yet I am just as definitely not the kind of person who goes jumping off cliffs (and last time I was on a fairground ride with my daughter I vowed never to fucking do that shit ever again).  But at the same time, I don’t feel like it’s the whole story, because there’s many a time I’ll want to listen to some extreme metal or crazy cut-up nonsense and not feel like I’m ‘chasing a buzz’ at all, but rather, just ‘having a nice time’.

3) There’s a theory that it’s the enjoyment of triumphing over fear/repulsion itself that we enjoy—that, in essence, we enjoy these terrible things because they are unenjoyable, and being able to show them who’s boss is what gives us the positive feelings.  It’s like we’re giving the Grim Reaper the finger, in some sense, like we’re reducing the hideous/terrifying/ghastly/repulsive to mere entertainment, and that is what feels good.  It feels like there might be some merit in this theory too, perhaps:  It is kinda cool to be able to say, ‘You can’t handle Whourkr or Utarm?  I love those bands.’  But this theory does reduce the entirety of enjoying the Art of the Horrendous to some kind of show-offy bullshit pretence, which it really doesn’t feel like, and makes the experience all about proving yourself to others, which it also doesn’t really feel like.  When I listen to full-on strangeness or watch Visitor Q, I tend to do it on my own, without anyone else in mind, and enjoy the experience wholly on my own terms, without anyone else’s validation or respect or values on my mind (and, as mentioned earlier, the enjoyment of such media actually makes my otherwise-quite-suitable life-partner respect me less).  So, although there may be an element of this involved, it doesn’t feel like it’s the whole picture.  (There is, of course, that thing of proving to yourself that you can handle something scary, but, for most of us who are actually into The Strange, that’s a no-brainer: We already know we can handle it because it’s the kind of thing we regularly seek out and experience, so it’s not so much of an issue.  I think there probably is an element of it involved [especially in the escalating scale of the ugliness we may seek out], but it’s definitely not the whole story.)

4) There’s the theory that male-identifying people are drawn to scary movies because they get gender reinforcement by ‘proving themselves’ in the face of fear/repulsion (‘lack of fear in the face of terror’ being a cultural marker for assessing masculinity).  One study (a ridiculously small one, with only thirty-six people of very limited diversity) showed that male-identifying people enjoyed a horror movie more when they watched it with a female-identifying person who was scared, and female-identifying people enjoyed the horror movie more when they watched it with a male-identifying person who wasn’t scared.  I suspect this is actually rank codswallop, given the male- and female-identifying people I personally know, but it could very well be a factor for a more mainstream population. What would I know?  Either way, it doesn’t really answer my particular question, and is much harder to shift across to other art forms—if these effects are remotely true in the first place, does listening to strange/ugly music produce similar effects (i.e., do chicks like listening to hideous noise if there’s a manly man around)?  Certainly my otherwise-quite-suitable life-partner has never once found my ever-so-masculine tolerance of unpleasant music/cinema even remotely erogenous.  As suggested earlier: rank codswallop.

5) There’s a theory that says we are drawn to horror/strangeness/ugliness because it is outside of our normal realm of experience, and, as such, becomes imbued with the Imaginary Value of the Rare.  In the same way as people care more about cheetahs than they do about pigs, or diamonds more than they care about bread, the very novelty of the horrendous makes it worth something.  Biologically, we are hardwired to look for anomalies in our environment, and curiosity about The Strange is a sensible survival technique.  It may very well be that we are drawn to horror movies/weird music/ghastly stories for the very same reasons we rubberneck at a car crash.  A normal person’s morbid fascination and my unending hunt for intriguing new sounds are basically rooted in the same biological thing.

Still from Martyrs

Now, when I saw this theory, it made a small ‘ching’ noise in my mental theatre, like a little gold bell struck once with a tiny hammer, because this is actually something I am consciously aware of in my search for interesting music/art/cinema.  I love nothing more than hearing some piece of music and thinking, ‘Fuck, I have never heard that before’.  When I make music, it’s always with the intention of adding something to the world that doesn’t already exist.  When I review an album, I’m always asking myself, ‘Is this just a pile of self-conscious cookie-cutter swill, or is this actually something worthwhile?’  So, seeing this concept of novelty applied to horror movies was actually a bit of an eye-opener.  I’d never thought of it that way before.  My interest in the dark/ugly/strange side of media is all linked by a conceptual interest in the far borders of human experience—in experiencing the very fringes of the normal/socially permissible.  I don’t want to jump off a cliff, but I’m deeply drawn to music/visuals/emotions that do (metaphorically speaking).  It’s not actually an attraction to the repulsive, it’s an attraction to the strange, and, by its very nature, the strange includes all those things that don’t fit into the normal.  And, since the normal spends so much time appreciating/collecting beauty and pleasantry and comfort, the strange ends up including the ugly and unpleasant and discomforting!  I’m not broken after all!  I just like weirdness, which happens to include ugliness and horror!  It may be that the part of me that lit up when I first heard Alvin and the Chipmunks is the exact same part of me that got a buzz out of Martyrs.

It does all make sense, that all this interest in The Repulsive stems from a blanket interest in The Strange.  Most of the other people I know who share this obsession with the macabre/ugly have similar interests in Surrealism, the Occult, dreams, etc.  Raised in a slick corporate world of ego-driven fitness, photoshopped beauty, and community as PR, some of us were unsurprisingly drawn to the things we weren’t meant to see, and sided with ugliness instead.  Like the underarm hair on a fashion model, there are many things that are true and real and natural that our society attempts to erase in the name of capitalist fear-mongering and mind control, and it is no surprise that some of us opted for the forbidden (sometimes for no other reason than it was forbidden in the first place).

Blood Dumpling Envy by Chris Mars

Still with me?  Great!  This paragraph or so of ‘rubbernecking at car crashes’ seems the perfect segue to take us to the next, quite a bit more disturbing, level of this journey of the horrendous: our interest in true horror (because it’s not just fantasy stuff we’re into).  The kind of person I’m talking about here (okay, so basically me at this stage, but I’m hoping there are enough of you out there to justify the effort involved in writing and publishing this essay), this kind of person doesn’t just watch Taxidermia and listen to Gnaw Their Tongues and enjoy the painted works of Chris Mars (and bonus points to any of you who ticked off all three boxes there).  It’s not just in the phantasmagorical realm that we’re drawn to ghastliness, but in the real.  The kind of person I’m talking about also reads true crime stories (the more aberrant the better) and searches out photos of things made of human skin.  This kind of person finds themselves late at night perusing the sickening online transcripts of the instructional cassette tape David Parker Ray (AKA the Toybox Killer) recorded for his bound and gagged kidnapping victims to listen to as they awoke on his torture table.  Because (I think) part of this interest in the great horror is not merely titillation or car-crash rubbernecking, but in unlocking something about what it means to be human—where the lines of experience are drawn, and what’s at the very edges of that terrain.  So, level two:  Hold on tight.

LEVEL TWO: THE MONSTROUS AS REALITY

Now, before we get to this level, let’s make it clear:  Horror is still horror to me.  It’s not fun.  The ugly is still ugly.  It’s not like I’m here going, ‘It’s so cool when people get hurt or have bad times’.  It’s not like that at all.  It’s something like eating a really hot chili:  It still hurts, lots, but there is some kind of intensity to the pain itself that can be enjoyed, while the burning is still really not enjoyable at all.  You can enjoy the intensity itself while still registering the pain as painful.  There’s an excitement to the extremity of the badness while still fully recognising the badness is bad.  Like the car crash we drive past, craning for corpses:  We know those corpses are real people, like everyone we love, and that those corpses represent a whole world of sadness and pain for other very real people, but at the same time, it’d be kinda cool to cop an eyeful.

So, drawing on all the theories above, do they still apply when the horror is not some kind of aesthetic choice, but a real-life tragedy?  Is it okay to get a buzz out of genuine misfortune?  Is it okay to be interested in the very darkest parts of the human organism?  Hasn’t it crossed some line now into sickness and depravity?  I argue it hasn’t, as long as we keep that previous point in mind: that bad shit is actually really fucking bad.  My interest in the true horrors of the world is actually miles away from ‘fun’.  It has elements of ‘attraction to novelty’ about it, it has elements of ‘triumphing over fear’, but it is never, ever ‘having a cool time’.  It’s definitely an interest in the aberrant while being fucking endlessly gratitudinously thankful that it is an aberration and not the norm.  It’s a much more serious business than listening to some wacky music or watching a bunch of actors pretend to be scared:  This is intrinsically linked to that stuff about experiencing the very borders of human experience and knowing what’s really going on.  It’s pretty fucked, but I feel better knowing just how fucked it actually is.

Collected Atrocities 2005-2008 by Gnaw Their Tongues

And sometimes it really does leave me scarred—sometimes permanently so.  That late night when I discovered myself reading what David Parker Ray had to say to his victims, I felt physically ill.  I was shaking with the horror of it all—that this shit was fucking real, this actually happened to people as flesh-and-blood as I am, as my daughter is.  I actually felt like I was having a panic attack.  It was not fun.  And yet I read it to the end and went hunting for more information, pictures, and testimonies in some kind of horrified fact-hunting fugue.

I had a similar reaction when reading about one researcher’s infiltration of the child pornography community on the Deep Web.  What I read there fucking completely freaked me out for a long time (families raising kids specifically for ‘sharing’; the schism between the anti-violence and pro-violence factions; the mind-boggling scale of it all).  But that didn’t stop me poking around the dark corners of reality, because, well, just because something is mind-bogglingly horrible doesn’t mean I should put my fingers in my ears and go ‘la la la’ in the hopes that it will go away.  It won’t.

When it comes to fictional depravity, I think the simple notions of ‘novelty’ and ‘triumph over horror’ might come into play, but when it comes to this far-scarier, far-more-awful real life horror, I think another element comes to the fore, namely knowing what’s really going on.  I like to think it’s the attraction of knowledge, pure unrefined warty-balls-and-all knowledge itself, that draws me in.  (But of course, I’m not scouring astrophysicist sites for knowledge; I’m not trawling marine biology sites for knowledge; it’s simply not the case that it’s ‘just knowledge’ that interests me.  It’s very definitely ‘knowledge about things that are horrible’ that attracts me.  So, what is it about that knowledge regarding specifically horrendous, fucking ghastly shit that interests me?  Is it the ‘triumphing over fear’ stuff investigated above?  Is it the ‘fringes of experience’ stuff?)

I think, in the end, it’s some kind of a desperate attempt to understand what we’re capable of—what I, as a human being, must be capable of.  When I talk about an interest in exploring ‘the fringes of human experience’, I wonder if, deep down, it’s actually about exploring what I could be capable of—what you could be capable of.  It’s about what any of us could be capable of.  Because we’re all the same species, exactly the same species, as David Parker Ray or Jeffrey Dahmer or Elizabeth Bathory.  Anything they could do (I’m not talking about feats of strength or remarkable agility here), I could do, or you could do.  And yet, somehow, through some amazing conjunction of circumstances, we don’t do these terrible, fucked up things.  And that feels great.

When we know just how horrible things can be, it gives us two things:

1) We are armed with the shining scimitar of actual truth, and

2) We are filled with the glowing light of gratitude that whatever foul fucking piece of disaster we’ve just finished consuming is not, in fact, happening to us right now.

And truth and gratitude, I think, may be worth more than a little horror.

SOME KIND OF GLIB POINT-PROVING SUMMARY

In closing, what have I learned?  I think the most important thing here is that an interest in the strange is not necessarily a problem or some kind of symptom of a broken person, or something that we should be concerned about in our young ones, or anything like that.  An interest in the strange can definitely bring people into contact with horrible, horrible things and can definitely make the soundtrack of your lounge room less comfortable for your significant others, but it can also bring a lot of truth into your lives.  Unpleasant, awful, trauma-inspiring truth, but truth nonetheless.  As a vegan-type person, I’ve definitely seen a lot more trauma-inspiring footage than most mainstreamer corpse-eating-type people, but I can’t help but feel that if I have to choose between comfortable illusion and uncomfortable truth, I’ll always end up choosing to know the ugly facts.  It’s a bit like that.

In the end, I’m not actually saying, ‘I listen to weird music, which is somehow loosely tied into valuing truth more than people who listen to mainstream music, so I’m a better person than you’.  I’m not actually saying, ‘People who only listen to carefully sanitised, executively driven, corporately produced music are somehow trapped in an inauthentic world of capitalist product-driven illusion, and I’m not, so nyer’.  I’m not really saying, ‘Weirdness is better, straight people suck massive dogballs’.  Or am I?

Maybe, deep down, I am saying that.  And maybe this is really just me petulantly getting back at everyone who ever called me a weirdo.  How can I possibly tell?  Funny how the subconscious works.

The Dark Secret at the Heart of AI

No one really knows how the most advanced algorithms do what they do. That could be a problem.

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

The artist Adam Ferriss created this image, and the one below, using Google Deep Dream, a program that adjusts an image to stimulate the pattern recognition capabilities of a deep neural network. The pictures were produced using a mid-level layer of the neural network.

Adam Ferriss

In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”

Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.

At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.

Adam Ferriss

The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.

You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
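
As a rough, hedged illustration of those mechanics, here is a minimal sketch in Python with NumPy of a toy two-layer network: a forward pass turns an input into an output, and back-propagation nudges the weights toward a target. The layer sizes, learning rate, and data are arbitrary assumptions for illustration, nothing like the scale of a real deep network.

```python
import numpy as np

# A toy two-layer network: 3 inputs -> 4 hidden neurons -> 1 output.
# Real "deep" networks have thousands of units per layer and many layers.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # weights from inputs to the hidden layer
W2 = rng.normal(size=(4, 1))   # weights from the hidden layer to the output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([[0.2, 0.7, 0.1]])   # one input example (say, pixel intensities)
y = np.array([[1.0]])             # the output we want the network to produce

for step in range(1000):
    # Forward pass: each layer computes weighted sums, then a nonlinearity.
    h = sigmoid(x @ W1)           # hidden-layer activations
    out = sigmoid(h @ W2)         # the network's current output

    # Back-propagation: push the error backwards and adjust every weight
    # slightly in the direction that reduces it.
    d_out = (out - y) * out * (1 - out)   # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * x.T @ d_h

print(out)   # after training, close to the target of 1.0
```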

The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.

Ingenious strategies have been used to try to capture and thus explain in more detail what’s happening in such systems. In 2015, researchers at Google modified a deep-learning-based image recognition algorithm so that instead of spotting objects in photos, it would generate or modify them. By effectively running the algorithm in reverse, they could discover the features the program uses to recognize, say, a bird or building. The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges. The images proved that deep learning need not be entirely inscrutable; they revealed that the algorithms home in on familiar visual features like a bird’s beak or feathers. But the images also hinted at how different deep learning is from human perception, in that it might make something out of an artifact that we would know to ignore. Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.

Further progress has been made using ideas borrowed from neuroscience and cognitive science. A team led by Jeff Clune, an assistant professor at the University of Wyoming, has employed the AI equivalent of optical illusions to test deep neural networks. In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for. One of Clune’s collaborators, Jason Yosinski, also built a tool that acts like a probe stuck into a brain. His tool targets any neuron in the middle of the network and searches for the image that activates it the most. The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.
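
The general idea behind such a probe can be sketched, with caveats, as activation maximization: start from random noise and repeatedly adjust the image by gradient ascent so that one chosen unit responds more strongly. The sketch below is not Yosinski’s actual tool; it assumes PyTorch and torchvision are available, and the network, layer index, and channel are arbitrary illustrative choices.

```python
import torch
import torchvision.models as models

# Activation-maximization sketch (illustrative only): find an input image
# that strongly excites one unit in the middle of a pretrained network.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)           # we only optimize the image, not the net

layer_index, channel = 10, 42         # arbitrary mid-network unit to probe

activations = {}
def save_output(module, inputs, output):
    activations["value"] = output
model.features[layer_index].register_forward_hook(save_output)

# Start from random noise and climb the gradient of the chosen unit's response.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    model(image)
    loss = -activations["value"][0, channel].mean()   # maximize the activation
    loss.backward()
    optimizer.step()

# `image` now caricatures whatever visual pattern that unit responds to.
```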

This early artificial neural network, at the Cornell Aeronautical Laboratory in Buffalo, New York, circa 1960, processed inputs from light sensors.
Ferriss was inspired to run Cornell’s artificial neural network through Deep Dream, producing the images above and below.

Adam Ferriss

We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. She was diagnosed with breast cancer a couple of years ago, at age 43. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment. She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond just medical records. She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”

After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. “You really need to have a loop where the machine and the human collaborate,” Barzilay says.

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.

This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.
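
The flavor of that kind of explanation can be sketched with a toy perturbation test: drop each word from a message in turn and see how far the classifier’s score falls. This is a generic illustration, not Guestrin’s code, and the keyword “classifier” below is a made-up stand-in for a trained model.

```python
# Toy perturbation-style explanation: which words most influenced a score?
# The "classifier" is a stand-in keyword model, purely for illustration.
SUSPICIOUS = {"transfer": 0.4, "midnight": 0.3, "package": 0.2}

def classify(words):
    """Return a score in [0, 1]; a real system would be a trained model."""
    return min(1.0, sum(SUSPICIOUS.get(w.lower(), 0.0) for w in words))

def explain(message, top_k=3):
    words = message.split()
    baseline = classify(words)
    influence = {}
    for i, word in enumerate(words):
        # Remove one word and measure how much the score drops without it.
        influence[word] = baseline - classify(words[:i] + words[i + 1:])
    return sorted(influence.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(explain("Meet me at midnight to transfer the package"))
# roughly [('transfer', 0.4), ('midnight', 0.3), ('package', 0.2)]
```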

Adam Ferriss

One drawback to this approach and others like it, such as Barzilay’s, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Guestrin. “We’re a long way from having truly interpretable AI.”

It doesn’t have to be a high-stakes situation like cancer diagnosis or military maneuvers for this to become an issue. Knowing AI’s reasoning is also going to be crucial if the technology is to become a common and useful part of our daily lives. Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration for his team as it tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn’t discuss specific plans for Siri’s future, but it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says.

Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”

If that’s so, then at some stage we may have to simply trust AI’s judgment or do without using it. Likewise, that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.

To probe these metaphysical concepts, I went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind. A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do. “The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?” he tells me in his cluttered office on the university’s idyllic campus.

He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”

Serial Killers With Greg Polcyn & Vanessa Richardson

 

About Serial Killers

Every Monday, Serial Killers takes a psychological and entertaining approach, offering a rare glimpse into the mind, methods, and madness of the most notorious serial killers in the hope of better understanding their psychological profiles. With the help of real recordings and voice actors, we delve deep into their lives and stories.

Link: https://www.parcast.com/serial/

Choosing the Proper Tool for the Task

Assessing Your Encryption Options

So, you’ve decided to encrypt your communications. Great! But which tools are the best? There are several options available, and your comrade’s favorite may not be the best for you. Each option has pros and cons, some of which may be deal breakers—or selling points!—for you or your intended recipient. How, then, do you decide which tools and services will make sure your secrets stay between you and the person you’re sharing them with, at least while they’re in transit?

Keep in mind that you don’t necessarily need the same tool for every situation; you can choose the right one for each circumstance. There are many variables that could affect what constitutes the “correct” tool for each situation, and this guide can’t possibly cover all of them. But knowing a little more about what options are available, and how they work, will help you make better-informed decisions.


Signal

Pros: Signal is free, open source, and easy to use, and it features a desktop app, password protection for Android, and secure group messages. It’s maintained by a politically conscious nonprofit organization and provides the original implementation of an encryption protocol used by several other tools,1 along with ephemeral (disappearing) messages, control over notification content, and sent/read receipts. It can also encrypt calls, with a call-and-response two-word authentication phrase so you can verify your call isn’t being tampered with.

Cons: Signal offers no password protection for iPhone, and being maintained by a small team means fixes are sometimes on a slow timeline. Your Signal user ID is your phone number, you may have to talk your friends into using the app, and it sometimes suffers from spotty message delivery.

Signal certainly has its problems, but using it won’t make you LESS secure. It’s worth noting that sometimes Signal messages never reach their endpoint. This glitch has become increasingly rare, but Signal may still not be the best tool for interpersonal relationship communications when emotions are heightened!2 One of Signal’s primary problems is its failure to recognize when a message’s recipient is no longer using Signal. This can result in misunderstandings ranging from hilarious to relationship-ending. Additionally, Signal for Desktop is a Chrome plugin; for some, this is a selling point, for others, a deal breaker. Signal for Mac doesn’t offer encryption at rest,3 which means that unless you’ve turned it on as a default for your computer, your saved data isn’t encrypted. It’s also important to know that while Signal does offer self-destructing messages, the timer is shared, meaning that your contact can shut off the timer entirely and the messages YOU send will cease to disappear.

Wickr

Pros: Wickr offers free, ephemeral messaging that is password protected. Your user ID is not dependent on your phone number or other personally identifying info. Wickr is mostly reliable and easy to use—it just works.

Cons: Wickr is not open source, and the company’s profit model (motive) is unclear. There’s also no way to turn off disappearing messages.

Wickr is sometimes called “Snapchat for adults.” It’s an ephemeral messaging app which claims to encrypt your photos and messages from endpoint to endpoint, and it stores everything behind a password. It is regularly audited and probably does exactly what it says it does, and its primary selling point is that your user login is independent from your cell phone number. You can log in from any device, including a disposable phone, and still have access to your Wickr contacts, making communication fairly easy. The primary concern with using Wickr is that it’s a free app and we don’t really know what those who maintain it gain from doing so; it should absolutely be used with that in mind. It’s also worth keeping in mind that Wickr is suboptimal for communications you actually need to keep, as there is no option to turn off ephemeral messaging and the timer only goes up to six days.

Threema

Pros: Threema is PIN-protected, offers decent usability, allows file transfers, and your user ID is not tied to your phone number.

Cons: Threema isn’t free, isn’t open source, doesn’t allow ephemeral messaging, and ONLY allows a 4-digit PIN.

Threema’s primary selling point is that it’s used by some knowledgeable people. Like Wickr, Threema is not open source but is regularly audited, and likely does exactly what it promises to do. Also like Wickr, the fact that your user ID is not tied to your phone number is a massive privacy benefit. If lack of ephemerality isn’t a problem for you (or if Wickr’s ephemerality IS a problem for you), Threema pretty much just works. It’s not free, but at $2.99 for download, it’s not exactly prohibitively expensive for most users. With a little effort, Threema also makes it possible for Android users to pay for their app “anonymously” (using either Bitcoin or Visa gift cards) and directly download it, rather than forcing people to go through the Google Play Store.

WhatsApp

Pros: Everyone uses it, it uses Signal’s encryption protocol, it’s super straightforward to use, it has a desktop app, and it also encrypts calls.

Cons: Owned by Facebook, WhatsApp is not open source, has no password protection and no ephemeral messaging option, is a bit of a forensic nightmare, and its key change notifications are opt-in rather than default.

The primary use case for WhatsApp is to keep the content of your communications with your cousin who doesn’t care about security out of the NSA’s dragnet. The encryption WhatsApp uses is good, but it’s otherwise a pretty unremarkable app with regards to security features. It’s extremely easy to use, is widely used by people who don’t even care about privacy, and it actually provides a little cover due to that fact.

The biggest problem with WhatsApp appears to be that it doesn’t necessarily delete data, but rather deletes only the record of that data, making forensic recovery of your conversations possible if your device is taken from you. That said, as long as you remain in control of your device, WhatsApp can be an excellent way to keep your communications private while not using obvious “security tools.”

Finally, while rumors of a “WhatsApp backdoor” have been greatly exaggerated, if WhatsApp DOES seem like the correct option for you, it is definitely a best practice to enable the feature which notifies you when a contact’s key has changed.
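
For readers who want a concrete picture of what that notification is actually checking, here is a minimal, hypothetical sketch of the underlying idea, not WhatsApp’s or Signal’s actual code: the app keeps a fingerprint of each contact’s public key and warns you if the key it is handed later no longer matches. The example uses Python with the third-party cryptography package, purely for illustration, and the contact “alice” is made up.

```python
# Illustrative sketch only: how a messenger might detect that a contact's key changed.
# Assumes the third-party "cryptography" package is installed (pip install cryptography).
import hashlib

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey


def fingerprint(public_key) -> str:
    """Hash the raw public key bytes into a short, human-comparable fingerprint."""
    raw = public_key.public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
    return hashlib.sha256(raw).hexdigest()[:16]


# The fingerprint we stored when we first verified our contact.
first_key = X25519PrivateKey.generate().public_key()
known_fingerprints = {"alice": fingerprint(first_key)}

# Later the server hands us a key claiming to be Alice's. If she reinstalled the app,
# switched phones, or is being impersonated, the key (and its fingerprint) will differ.
later_key = X25519PrivateKey.generate().public_key()

if fingerprint(later_key) != known_fingerprints["alice"]:
    print("Warning: Alice's security key has changed. Re-verify before trusting.")
```

Enabling the notification simply surfaces that comparison to you instead of letting the app accept the new key silently; it does not tell you why the key changed, which is why re-verifying in person or over another channel is still worthwhile.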

Facebook Secret Messages

Pros: This app is widely used, relies on Signal’s encryption protocol, offers ephemeral messaging, and is mostly easy to use.

Cons: You need to have a Facebook account to use it, it has no desktop availability, it’s kind of hard to figure out how to start a conversation, there’s no password protection, and your username is your “Real Name” as defined by Facebook standards.

Facebook finally rolled out “Secret Messages” for the Facebook Messenger app. While the Secret Messages are actually pretty easy to use once you’ve gotten them started, starting a Secret Message can be a pain in the ass. The process is not terribly intuitive, and people may forget to do it entirely as it’s not Facebook Messenger’s default status. Like WhatsApp, there’s no password protection option, but Facebook Secret Messages does offer the option for ephemerality. Facebook Secret Messages also shares the whole “not really a security tool” thing with WhatsApp, meaning that it’s fairly innocuous and can fly under the radar if you’re living somewhere people are being targeted for using secure communication tools.

There are certainly other tools out there in addition to those discussed above, and use of nearly any encryption is preferable to sending plaintext messages. The most important things you can do are choose a solution (or series of solutions) which works well for you and your contacts, and employ good security practices in addition to using encrypted communications.

There is no one correct way to do security. Even flawed security is better than none at all, so long as you have a working understanding of what those flaws are and how they can hurt you.

— By Elle Armageddon

Burner Phone Best Practices

A User’s Guide

A burner phone is a single-use phone, unattached to your identity, which can theoretically be used to communicate anonymously in situations where communications may be monitored. Whether or not using a burner phone is itself a “best practice” is up for debate, but if you’ve made the choice to use one, there are several things you should keep in mind.

Burner phones are not the same as disposable phones.

A burner phone is, as mentioned above, a single-use phone procured specifically for anonymous communications. It is considered a means of clandestine communication, and its efficacy is predicated on having flawless security practices. A disposable phone is one you purchase and use normally with the understanding that it may be lost or broken.

Burner phones should only ever talk to other burner phones.

Using a burner phone to talk to someone’s everyday phone leaves a trail between you and your contact. For the safety of everyone within your communication circle, burner phones should only be used to contact other burner phones, so your relationships will not compromise your security. There are a number of ways to arrange this, but the best is probably to memorize your own number and share it in person with whoever you’re hoping to communicate with. Agree in advance on an innocuous text they will send you, so that when you power your phone on you can identify them based on the message they’ve sent and nothing else. In situations where you are meeting people in a large crowd, it is probably OK to complete this process with your phone turned on, as well. In either case, it is unnecessary to reply to the initiation message unless you have important information to impart. Remember too that you should keep your contacts and your communications as sparse as possible, in order to minimize potential risks to your security.

Never turn your burner on at home.

Since cell phones both log and transmit location data, you should never turn on a burner phone somewhere you can be linked to. This obviously covers your home, but should also extend to your place of work, your school, your gym, and anywhere else you frequently visit.

Never turn your burner on in proximity to your main phone.

As explained above, phones are basically tracking devices with additional cool functions and features. Because of this, you should never turn on a burner in proximity to your “real” phone. Having a data trail placing your ostensibly anonymous burner in the same place at the same time as your personally-identifying phone is an excellent way to get identified. This also means that unless you’re in a large crowd, you shouldn’t power your burner phone on in proximity to your contacts’ powered-up burners.

Given that the purpose of using a burner phone is to preserve your anonymity and the anonymity of the people around you, identifying yourself or your contacts by name undermines that goal. Don’t use anyone’s legal name when communicating via burner, and don’t use pseudonyms that you have used elsewhere either. If you must use identifiers, they should be unique, established in advance, and not reused.

Consider using an innocuous passphrase to communicate, rather than using names at all. Think “hey, do you want to get brunch Tuesday?” rather than “hey, this is Secret Squirrel.” This also allows for call-and-response as authentication. For example, you’ll know the contact you’re intending to reach is the correct contact if they respond to your brunch invitation with, “sure, let me check my calendar and get back to you.” Additionally, this authentication practice allows for the use of a duress code, “I can’t make it to brunch, I’ve got a yoga class conflict,” which can be used if the person you’re trying to coordinate with has run into trouble.

Beware of IMSI catchers.

One reason you want to keep your authentication and duress phrases as innocuous as possible is that law enforcement agencies around the world are increasingly using IMSI catchers, also known as “Stingrays” or “cell site simulators,” to capture text messages and phone calls within their range. These devices pretend to be cell towers, intercept and log your communications, and then pass them on to real cell towers so your intended contacts also receive them. Because of this, you probably don’t want to use your burner to text things like, “Hey are you at the protest?” or “Yo, did you bring the Molotovs?”

Under normal circumstances, the use of encrypted messengers such as Signal can circumvent the use of Stingrays fairly effectively, but as burner phones do not typically have the capability for encrypted messaging (unless you’re buying burner smartphones), it is necessary to be careful about what you’re saying.
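
To make that point concrete, here is a minimal, hypothetical sketch of why an intercepting relay can log an end-to-end encrypted message without learning its content: the ciphertext it copies is useless without the key, which never passes through the tower. It uses Python with the third-party cryptography package and a simple shared key; real messengers negotiate keys with more elaborate protocols, but the principle is the same.

```python
# Illustrative sketch: an interceptor can copy ciphertext but cannot read it.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet, InvalidToken

# The key is known only to the two endpoints; it is never sent through the tower.
shared_key = Fernet.generate_key()
sender = Fernet(shared_key)

ciphertext = sender.encrypt(b"meet at the usual place at noon")

# The IMSI catcher sits in the middle and logs everything it relays...
intercepted_copy = ciphertext

# ...but without the shared key it cannot decrypt its copy.
eavesdropper = Fernet(Fernet.generate_key())
try:
    eavesdropper.decrypt(intercepted_copy)
except InvalidToken:
    print("Interceptor sees only ciphertext; the content stays private.")

# The intended recipient, holding the shared key, reads the message normally.
receiver = Fernet(shared_key)
print(receiver.decrypt(intercepted_copy).decode())
```

Note that this protects only the content: the tower, real or fake, still sees which devices are transmitting, when, and from roughly where, which is exactly why the rest of the burner discipline described here still matters.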

Burner phones are single-use.

Burner phones are meant to be used once, and then considered “burned.” There are a lot of reasons for this, but the primary reason is that you don’t want your clandestine actions linked. If the same “burner” phone starts showing up at the same events, people investigating those events have a broader set of data to build profiles from. What this means is, if what you’re doing really does require a burner phone, then what you’re doing requires a fresh, clean burner every single time. Don’t let sloppy execution of security measures negate all your efforts.

Procure your burner phone carefully.

You want your burner to be untraceable. That means you should pay for it in cash; don’t use your debit card. Ask yourself: are there surveillance cameras in or around the place you are buying it? Don’t bring your personal phone to the location where you buy your burner. Consider walking or biking to the place you’re purchasing your burner; covering easily-identifiable features with clothing or makeup; and not purchasing a burner at a location you frequent regularly enough that the staff recognize you.

Never assume burner phones are “safe” or “secure.”

For burner phones to preserve your privacy, everyone involved in the communication circle has to maintain good security culture. Safe use of burners demands proper precautions and good hygiene from everyone in the network: a failure by one person can compromise everyone. Consequently, it is important both to make sure everyone you’re communicating with is on the same page regarding the safe and proper use of burner phones, and also to assume that someone is likely to be careless. This is another good reason to be careful with your communications even while using burner phones. Always take responsibility for your own safety, and don’t hesitate to erase and ditch your burner when necessary.

— By Elle Armageddon

Why I Choose to Live in Wayne National Forest

TO THE POINT

Our current system is like an abandoned parking lot. Asphalt was laid, killing life and turning everything into a homogenous blackness, a dead sameness. The levers of maintaining this have broken down. No one is coming to touch up the asphalt. In abandoned parking lots, cracks form and life grows from the cracks.

All these riots, environmental catastrophes, food crises, occupations of land by protestors, and various breakdowns in daily life are cracks in the asphalt. What will spring from the cracks depends on what seed is planted within them. Beautiful flowers could grow. Weeds could grow.

Modern rich nations have walled themselves in. Colonized India was a world apart from Britain. The United States exists an ocean away from the places it drone strikes. Citizenship acts as a tool of ethnic cleansing. The world, according to the new nationalists, will be a checkerboard of racially homogenous governments with swords continuously drawn. The rich nations will now literally wall themselves in, ensure their “racial purity”, and steal from the poorer nations until the end of days. At least, this is the future envisioned by the Trump/Bannon regime. This is the future governments everywhere seem to be carrying us toward, a divided people screaming in joy or anger.

The continued and accelerated fracking of Wayne National Forest, Ohio’s only national forest, fits perfectly into this worldview: a mode of governance that manages the cracks. The power of this world, and of the world our rulers wish to realize, depends on fracking wells, oil rigs, pipelines, and energy infrastructure in general. To oppose this infrastructure is to oppose this system, to take the cracks as our starting point.

I am living in Wayne National Forest in hopes of, first and foremost, protecting the forest. I hope to crack the asphalt and plant a flower.

Everyone is welcome to join the occupation, beginning on May 12th. Everyone is welcome to visit. Everyone is welcome to participate, in one way or another, in this land defense project.

EXTENDED

Some conclude that the election of Trump signals an end of the left. Though the opinion seems rushed, and forces could yet push for a revitalization, if it is true, then good riddance.

Those of the left are preoccupied with flaunting ego. Taking up their various labels (communist, socialist, anarchist, Trotskyist) seems more about themselves than any revolutionary project. The labelling urge is bureaucratic. Leftists have done themselves no favors talking like politicians. Their endless meetings bear all the marks of officialdom and red tape. Distant from daily life, they alienate those who truly seek a new world. At most meetings, not much more is accomplished than an agreement to continue having meetings. This is a hallmark of bureaucracy.

Rally after rally displays the same dead tactics and strategies. Standing on the sidewalk, holding signs, and chanting slogans at buildings will never bring change. These events only pose a threat when a variety of activity occurs, when people stop listening to the activists. This could be anything from smashing up cop cars to a group of musicians playing spur of the moment.

Supervisors hate the unplanned.

If change is sought, then an understanding of the ruling structure is vital. Understanding the current arrangement requires a grasp of history. History reveals how the present came to be, and such recognition provides the basis for comprehending our current world.

The first known civilization sprang up in modern-day Iraq around 6,000 years ago. This did not occur because humans became smarter or more physically fit. Modern humans evolved physically around 100,000 years ago and mentally 40,000 years ago. The five main qualities of civilization are: (1) city life; (2) specialized labor; (3) control of resources above what is needed to survive by a small group, leading to (4) class rank and (5) government. This is still the order we face today.

Before civilization’s ascendancy, humans organized life in various ways. One was the hunter-gatherer band. These were groups of 100 or fewer, usually with no formal leadership and no differences in wealth or status. These groups were mobile, never staying in one spot more than temporarily. Again, it was not due to stupidity that these people did not develop more civilized ways of living. One could argue the hunter-gatherer life promotes a general knowledge while modern society encourages a narrow, yet dense, knowledge.

Agriculture and animal domestication led to farming villages and settled life. With this came the “Trap of Sedentism.” After a few generations of village life people forgot the skills needed to live nomadically and became dependent upon the village. In general, people worked harder and longer to survive while close quarters with each other and animals increased illness. With greater access to food, the population increased.

Chiefdoms were another form of pre-civilized living. These ranked societies had various clans placed differently on the pecking order, with everyone governed by a chief. The chief controlled whatever food was produced above what the village needed to survive: the surplus. These societies came the closest to civilized living patterns.

Agriculture’s surplus allowed more people to be fed than a hunter-gatherer band could support. With more people working the fields and tinkering with technology came innovation, and with innovation a larger surplus. This larger surplus allowed for continued population growth. The cycle continued up to the birth of civilization and became more rapid after it.

Economists have advertised the story of “barter” for a very long time, perhaps because it is so vital to their domain of study. The narrative is as follows: John owns 3 pairs of boots but needs an axe, and Jane has 2 axes but needs a pair of boots. The two trade with each other to get what they want, each trying to get the upper hand in the trade. The massive problem with this story is that it is false.

Adam Smith, an economist from the late 1700s, popularized this tale and made it the basis of economics. He asserted that wherever money did not exist one would find barter, in all cases, and pointed to aboriginal Americans as an example. Yet when Europeans came to conquer the continent, they did not find a land of barter where money was nonexistent.

Barter took place between strangers and enemies. Within the village, one found different forms of distribution. One place may have had a central hub that people added to and took from. Another may have passed free gifts among its members. To revisit our John and Jane example, John takes Jane’s axe, and Jane knows that when she needs something of John’s he will let her have free use of it. What we never find happening is barter.

This is important because the barter folktale convinces people our present system is a reasonable development. If humanity’s natural propensity is to barter, then money and profitable exchange seem like an evident progression. This is not to say that barter is “unnatural”, as it came from the heads and relationships of people, but that it is not the only game in town. If it is not the only game in town, and there are a multitude of ways humanity could and has organized itself, then the current system can’t be justified as the necessary development of human nature.

So, for most of human history impersonal government power did not exist. Communities were self-sufficient and relationships were equal and local. The rise of civilization and government changed this. Dependency and inequality marked associations and the few held power over the many.

Surplus food put some in a position where they did not need to work for their survival. While most still obtained resources from the earth and survived on their labor, a few extracted supplies from the many. This small group became the wealthy ruling class and controlled the allocation of production excess. The basic relationship here is parasitic.

The smart parasite practices restrained predation: it doesn’t use up all the energy of the host, so the host stays alive and the parasite’s own survival continues. The smartest parasite defends its host. Rulers learned to protect the workers for this reason and in the process increased these laborers’ dependence on them. Increasing population developed into cities, and problems of coordination arose with more people living in a single space. The ruling parasites organized social life to maintain their control of the surplus and, at the same time, to rationalize the city to solve problems of communication and coordination.

State power emerged from large-scale infrastructural projects as well, specifically irrigation. Irrigation is a way of diverting water from its source to fields. Large-scale irrigation endeavors required thousands of people and careful utilization of raw materials. Undertaking such a plan required a small group with the technical know-how to control what labor was done, how and when it was done, how much material was needed, and when and how it was used, and to utilize these same networks of influence for future repairs. Large infrastructure and complex city life increased the dependency of producers on rulers.

The city is the basis of civilization. The city, simply defined, is land where too many people exist for it to be self-sufficient. It requires continuous resource importation to keep alive a large population that cannot live off the soil. This impersonal power, whose structures don’t change, is based on mindless expansion outside of the city in search of resources. War, of course, is the most efficient way to grab these resources when one city’s importation search runs head-on into another’s, or into people who live in the way of what is sought. Conquering existed before civilization yet became perfected within its system.

Emperors emerged to rule the masses, gaining prestige from war prowess. Forming empires, these leaders ushered in a new form of rule over large territories gained in conquest. Peasants who controlled their own land and were not controlled by feudal lords came into contact with government only once a year. Politics was centralized in the palace. Ruling families might change, but this did not affect peasant lives. Without modern surveillance technologies and police institutions it was virtually impossible to continuously govern every piece of land. Peasants organized their villages on their own. The only time they saw their government was when the army was sent to collect taxes. This all changed with the rise of the nation-state and mass politics.

Feudalism was based on loyalty to the King and land distribution by the King to obedient lords. Lords, in turn, granted parts of their land to vassals under similar conditions of obedience. Governing authority was decentralized. The King was the ultimate feudal lord but could only exert power over those lords who held land from him. The entire system depended on the lords’ willingness to obey, or on the King’s ability to rally enough troops to crush the disobedient. This system was basically moneyless, relying on rents paid in food and other goods flowing up the feudal pyramid. This changed with increased commercial activity.

Buying and selling began to replace rents, with power beginning to shift to merchants and urban commercial activity in general. This change gave Kings the ability to tax those within their domain in money. Centralization was required to do this, and it undermined the feudal relation of each lord controlling his own land. Any further development of commercial activity would strengthen the monarchy over the nobility.

Changes in warfare required taxes and the creation of a permanent army. Before, Kings would call upon their lords who would rally their vassals to the King’s will. Feudal armies were small, unreliable, and war was local. With Kings increasing their revenue, they were able to hire foreign mercenaries and pay a small permanent army. If other Kings did not want to be conquered, they conformed or died. With a permanent army came a need to increase taxation for maintenance, further undermining feudalism.

Kingly taxation of the populace established a direct link between the highest governing authority and the lowest on the power chain. This completely undermined the rule of lords and centralized power into national monarchies. The primary concern of these nations was that people consented to taxes.

Another way the nation-state emerged was through city-state infighting. Dictators would rise within the city to calm civil war taking place between the rich and poor. These dictators would conquer more land and become princes. When these carved out territories fell apart, cities and other units would try to conquer each other to fill the power vacuum. Eventually, consolidation would happen and usually with the help of mercenaries. Since mercenaries held it all together, whoever controlled the national treasury had power.

When vast empires fell apart, specifically in the Middle East, there arose smaller governing units. These smaller units were concerned with conquering and so had to develop militaries. To do this they taxed the population and could only do so if the people consented, meaning they had to provide services and other incentives. Politics moved out of the palace to everywhere. The nation-state gave birth to mass-politics.

The nation-state is totalitarian by nature. It must care about what its population is doing. Government presence went from one year at tax time to being a constant. Laws upon laws developed, strictly regulating the life of the people in the borders of the nation. The daily life of the people was now bound together with the health and viability of the system. Here, we find the international system of nation-states and world market.

Peasants no longer grew food, ate it, and had a surplus. Now, they sold their food on the market, which the nation-state taxed, in exchange for money and used this money to buy food and pay taxes. Urban centers made goods for a taxable wage and the goods they made could be taxed. Imported goods from other nations could be taxed as well. Truly, all of daily life was absorbed into the system. People’s continued consent and work within new market parameters called forth the totalitarian nature of the nation-state.

Economic development led to restructuring. Small craftsmen went out of business when factory production was able to make, and therefore sell, goods cheaper and faster. These craftsmen found themselves doing unskilled and semiskilled labor on the factory floor. Before, production was individual. Those that produced a good also owned the shop and tools so it made sense that they should get all the money earned. Factory production saw creation become social, with many helping to make the goods, while payment stayed individual, with factory owners who contributed no labor gaining all the profit for simply owning the tools and the building. This is the same parasitic relationship found throughout all of civilization, just new roles and new ways for the ruling class to live off of the labor of many.

The workers movement developed in response to this, made up of various left ideologies, from Marxian communism to anarchism. Regardless of ideological preference, the idea was the same. The factory was the kernel of the new world. People had been separated from the land and from each other through borders, style of work, race, and a number of other things. The factory got all these different types of people together and under the same conditions. The more the factory spread, the more people were united by their similar exploitation. Eventually, they would rise up and usher in a new world based on freedom and equality.

There were problems with this. People were united in their separation. It took the imposition of an ethic by the workers movement, that all these different types of people should identify first and foremost as workers, for collective action to take place. All the workers did not have similar interests. Young white single males have much different concerns than a single black immigrant mother, regardless of being in the same factory. Obviously, government leaders and factory owners utilized these differences to their own advantage by privileging some groups over others. The slogan “An Injury to One is an Injury to All” was based more on faith than fact.

The workers movement saw the factory’s mass employment with hope as well. With massive profits, owners would reinvest this money in machines and other tools. Needing people to work the new equipment, they hired. Selling more products, created more efficiently, led to more profits and the cycle continued. As the factory system expanded, it was believed capitalism was bringing its own collapse. More were being united by a common state, that of the worker, and eventually their false separations would subside. They would see each other as the same, regardless of creed or color, see their true enemy in the factory owners and their government, and revolt.

For this reason, the workers movement advocated the expansion of the factory in a policy called “proletarianization.” When the Bolshevik Communists came to power in Russia, their main concern was to industrialize the nation for this purpose, similar to the rise of Communist governments elsewhere. One could ask the obvious question: Would spreading the factory system and the working-class condition really bring its end? Would spreading the plantation system and the slave condition end slavery, or strengthen it?

If Trump is the end of the left, good riddance.

The conditions that brought about the original workers movement have changed, yet the left seems blind to this or resorts to mental gymnastics. For starters, the current economy is deindustrializing in America and post-industrial worldwide. Even in current industrial powerhouses like China and India, the employment rates and growth of a former period are not found. For the United States, Europe, and the West in general, there is no real industrial manufacturing base. This type of work only happens in the colonized world or in prison. It’s only sweatshops of various types in different spaces.

In fact, it may even be fallacious to speak of a “colonized” world. The nation-state seems not to matter anymore. A new, global system has developed. Transnational corporations organize social life, almost everywhere, to operate for the creation of value. Every Facebook post made and every online search informs advertisers and helps businesses adapt their products. The spending habits tracked on your debit card help establish who you are and what type of products you like. One’s every interaction with the current world contributes to value creation. In other words, production has moved from the workplace to all of life, and this has only been possible with modern communication technology and the new post-industrial economy.

The workers now are not the same as in the past, in this country or in countries similarly situated. The left, when admitting that things have changed, will then perform backflips to also claim nothing has changed. The service sector has come to dominate, yet the left holds its orientation to be exactly the same as in the factory era. I was discussing this with a Trotskyist friend who worked a service sector job at a burrito joint. Since they were still paid a wage, he claimed, the form of capitalist exploitation had not changed.

Taking the example of the burrito joint, the harvesting of the lettuce, tomatoes, and other food items used to make those burritos most likely occurred in an underdeveloped country, or was done by migrants or prisoners in this country. Those harvesters receive wages much lower than those in the service sector (usually), and their labor is more vital to the economic set-up than easily automated service jobs. If they did not harvest the food, my Trotskyist friend would have no lettuce to put on someone’s burrito. Building a burrito is not the same as building a highway, a car, or a skyscraper, or harvesting fields. No kernel of a new world can be seen within this type of work, except perhaps by someone who belongs in a psychiatric ward.

So, how will a better world be brought about? I think anyone who believes they know the answer to this question is arrogant and needs to come back down to earth. I certainly do not know the answer. I will provide some thoughts to help answer this question.

Every single revolution has failed. The French Revolution, American Revolution, Russian Revolution, the list goes on, all have failed to usher in a world that has ended the few dominating the many. To hang on to these past conceptions of revolution is to condemn the next one to loss. This means a rethinking of fundamental questions is needed.

What does revolutionary action succeed at doing?

First and foremost, it succeeds at establishing a set of values within a subversive context. Courageousness is a good thing to find in the hearts of people, yet the soldier who goes to fight and die is “courageous.” The last goal of revolutionary action is to get people to join the armed forces. An insurrectionary act affirms notions of justice, courage, honor, right and wrong, freedom, kindness, empathy, etc. that completely negate the selfishness, materialism, and overall toxicity of the dominant values.

This is where anarchists who fetishize violence go wide of the mark. Simply put, just because we burn everything to the ground does not mean people stop being assholes. This is not to say, however, that these values won’t get affirmed in riots and the like. Who could say those in Ferguson, Baltimore, and many other places were not courageous, with deep notions of justice, right and wrong, and freedom? These values can also be affirmed through wise grandparents going on a hike with their grandson, a teacher who treats her students as equals, a victim who stands up to their bully, a group of musicians playing carefree, a ropeswing and a group of good people, graffiti, sharing a smoke, stealing from Walmart, fighting mobilized Nazis, and many other ways.

Revolutionary action does not just happen at a march or political meeting. I’d go so far as to argue it happens at these places less of the time.

Secondly, it succeeds in taking space to keep these values and energy going. It takes space and organizes the shared life within it in a completely new way. It may even be wrong to describe this as “organized.”

When hegemonic powers fall apart, power REALLY does go back to individual people. Depending on how we relate to each other, flowers or weeds could grow. What seed is planted in the cracks?

How is power disrupted?

From here, we can look to the most interesting struggle to occur in the United States in many years: Standing Rock. For all its problems, the Standing Rock resistance highlighted some important things. Power is found in infrastructure. The construction of the pipeline only strengthens the world of pipelines and oil dependency. These constructions, from oil pipelines to highways to electrical grids to fracking equipment, help keep this world running. Those of us who went to Standing Rock and stayed with a certain group at Sacred Stone saw the banner: “Against the Pipeline and Its World!”

Standing Rock had one camp that sat in the path of the pipeline to block construction, until forcibly removed by the police, and camps across the river. This struggle blocked the construction of a world it did not want to see and built one it did want, right in the space it captured. It had its own food supply, water supply, etc. It had its own logistical system, outside of government and business. It relied on the power of people.

During the Occupy movement, it seemed natural for those in Oakland to block the port. The port brought in commodities to be sold, benefitting the rich and propping up the system. It seemed like common sense for revolutionaries in Egypt to take Tahrir Square, the center of activity, block main roads, stopping people from shopping and working, and burn police stations. In fact, focusing on Tahrir Square misses all the blocked roads and burnt police stations across Egypt.

The reflex seems to be to block the flows of this world and construct new ones, to block one form of life and build many new forms.

Why do revolutions fail?

There is no good answer to this.

One reason revolt fails to materialize (among many) is that activity gets pacified by liberals. This, again, could be seen at Standing Rock, where those who were part of the “Spirit Camps” put their bodies between police and the “Warrior Camps,” telling them to demobilize, leave the conflict they had initiated, and pray. It can be seen when liberals unmask covered protesters trying to push things further, or even pepper-spray them when they nonviolently damage property.

Following this, one of the most inspiring revolutions of the last 100 years was snuffed out by revolutionaries giving up their power in the belief that doing so was strategic. During the Spanish Civil War, workers in Barcelona, Aragon, and other urban and rural places took over the land and factories, abolished the government and money, and armed themselves. They subordinated themselves to Republican government authority in the belief that they could win the fight against the fascists by doing so.

What this showed is that the Republican government was no more capable of fighting fascists than autonomous armed workers. The workers should have trusted no one but themselves; instead they were repressed by both Republican and Communist henchmen until they fell in line. Both of these forces reintroduced market mechanisms and money, government authority, and other ways the few rule over the many. Contrary to their claims, these efforts did not make fighting the war any more efficient, and in some ways, especially the reintroduction of market forces into the food supply, they made things much worse. In the end, the fascists still won.

The problem here is viewing the conflict in purely military terms instead of as a social war. By falling in line with Republican government and military command, those in Barcelona and other places allowed that command to organize social life and, in the end, simply laid the groundwork for a fascist organization of society. Their self-organization should never have been sacrificed.

When revolutionaries forget their struggle is more than a military confrontation, they become exactly what they are fighting against. They become their enemy. They also miss inspiring movements by fetishizing combat. We heard the left praise the fight of Kurdish women in Rojava against ISIS, and justifiably so, yet we hear nothing about the grassroots councils that have sprung up and continue to survive all across Syria in spite of a horrible civil war. Where the Assad dictatorship’s authority collapsed, these councils took on the role of getting electricity, distributing food and water, healing the sick and injured, and whatever else is necessary to life.

I have spoken mainly in generalities, attempting to explain my reasoning adequately without overcomplicating or boring.

Over 700 acres of the Wayne National Forest have been auctioned off with hydrofracturing intentions. The Wayne is not new to gas and energy exploitation, yet this is a new and intensified maneuver in the war on Ohio’s only national forest. The plan from the Bureau of Land Management is to continue resource extraction until it’s all gone and The Wayne is dead. Some people will make a profit, though…

I will live in Wayne National Forest, in a long-term occupation starting on May 12th, in hopes of changing this tide. While it would be interesting for this to fit into some wider narrative of struggle, and in some ways it naturally does, that is not my main concern. My main concern is stopping the energy industry’s continued attack on the forest.

To anyone who has resonated with what’s been written, who sees this battle as their battle, and who believes they can help, PLEASE GET INVOLVED.

EVERYONE IS WELCOME TO COME.

To read:

– Affirming Gasland by the creators of the documentary Gasland

– 1984 by George Orwell

– The Madman: His Parables and Poems by Kahlil Gibran

– The Great Divorce by C.S. Lewis

– The Worst Mistake in the History of the Human Race by Jared Diamond

– What is Civilization? by John Haywood (found in The Penguin Historical Atlas of Ancient Civilizations)

– Debt by David Graeber

– To Our Friends by The Invisible Committee

To watch:

– Gasland

– Gasland 2

The Strange Persistence of Guilt

Those of us living in the developed countries of the West find ourselves in the tightening grip of a paradox, one whose shape and character have so far largely eluded our understanding. It is the strange persistence of guilt as a psychological force in modern life. If anything, the word persistence understates the matter. Guilt has not merely lingered. It has grown, even metastasized, into an ever more powerful and pervasive element in the life of the contemporary West, even as the rich language formerly used to define it has withered and faded from discourse, and the means of containing its effects, let alone obtaining relief from it, have become ever more elusive.

This paradox has set up a condition in which the phenomenon of rising guilt becomes both a byproduct of and an obstacle to civilizational advance. The stupendous achievements of the West in improving the material conditions of human life and extending the blessings of liberty and dignity to more and more people are in danger of being countervailed and even negated by a growing burden of guilt that poisons our social relations and hinders our efforts to live happy and harmonious lives.

I use the words strange persistence to suggest that the modern drama of guilt has not followed the script that was written for it. Prophets such as Friedrich Nietzsche were confident that once the modern Western world finally threw off the metaphysical straitjacket that had confined the possibilities of all previous generations, the moral reflexes that had accompanied that framework would disappear along with them. With God dead, all would indeed be permitted. Chief among the outmoded reflexes would be the experience of guilt, an obvious vestige of irrational fear promulgated by oppressive, life-denying institutions erected in the name and image of a punitive deity.

Indeed, Nietzsche had argued in On the Genealogy of Morality (1887), a locus classicus for the modern understanding of guilt, that the very idea of God, or of the gods, originated hand-in-hand with the feeling of indebtedness (the German Schuld—“guilt”—being the same as the word for “debt,” Schulden).1 The belief in God or gods arose in primitive societies, Nietzsche speculated, out of dread of the ancestors and a feeling of indebtedness to them. This feeling of indebtedness expanded its hold, in tandem with the expansion of the concept of God, to the point that when the Christian God offered itself as “the maximal god yet achieved,” it also brought about “the greatest feeling of indebtedness on earth.”

But “we have now started in the reverse direction,” Nietzsche exulted. With the “death” of God, meaning God’s general cultural unavailability, we should expect to see a consequent “decline in the consciousness of human debt.” With the cultural triumph of atheism at hand, such a victory could also “release humanity from this whole feeling of being indebted towards its beginnings, its prima causa.” Atheism would mean “a second innocence,” a regaining of Eden with neither God nor Satan there to interfere with and otherwise corrupt the proceedings.2

This is not quite what has happened; nor does there seem to be much likelihood that it will happen, in the near future. Nietzsche’s younger contemporary Sigmund Freud has proven to be the better prophet, having offered a dramatically different analysis that seems to have been more fully borne out. In his book Civilization and Its Discontents (Das Unbehagen in der Kultur), Freud declared the tenacious sense of guilt to be “the most important problem in the development of civilization.” Indeed, he observed, “the price we pay for our advance in civilization is a loss of happiness through the heightening of the sense of guilt.”3

Such guilt was hard to identify and hard to understand, though, since it so frequently dwelled on an unconscious level, and could easily be mistaken for something else. It often appears to us, Freud argued, “as a sort of malaise [Unbehagen], a dissatisfaction,”4 for which people seek other explanations, whether external or internal. Guilt is crafty, a trickster and chameleon, capable of disguising itself, hiding out, changing its size and appearance, even its location, all the while managing to persist and deepen.

This seems to me a very rich and incisive description, and a useful starting place for considering a subject almost entirely neglected by historians: the steadily intensifying (although not always visible) role played by guilt in determining the structure of our lives in the twentieth and twenty-first centuries. By connecting the phenomenon of rising guilt to the phenomenon of civilizational advance, Freud was pointing to an unsuspected but inevitable byproduct of progress itself, a problem that will only become more pronounced in the generations to come.

Demoralizing Guilt

Thanks in part to Freud’s influence, we live in a therapeutic age; nothing illustrates that fact more clearly than the striking ways in which the sources of guilt’s power and the nature of its would-be antidotes have changed for us. Freud sought to relieve in his patients the worst mental burdens and pathologies imposed by their oppressive and hyperactive consciences, which he renamed their superegos, while deliberately refraining from rendering any judgment as to whether the guilty feelings ordained by those punitive superegos had any moral justification. In other words, he sought to release the patient from guilt’s crushing hold by disarming and setting aside guilt’s moral significance, and re-designating it as just another psychological phenomenon, whose proper functioning could be ascertained by its effects on one’s more general well-being. He sought to “demoralize” guilt by treating it as a strictly subjective and emotional matter.

Health was the only remaining criterion for success or failure in therapy, and health was a functional category, not an ontological one. And the nonjudgmental therapeutic worldview whose seeds Freud planted has come into full flower in the mainstream sensibility of modern America, which in turn has profoundly affected the standing and meaning of the most venerable among our moral transactions, and not merely matters of guilt.

Take, for example, the various ways in which “forgiveness” is now understood. Forgiveness is one of the chief antidotes to the forensic stigma of guilt, and as such has long been one of the golden words of our culture, with particularly deep roots in the Christian tradition, in which the capacity for forgiveness is seen as a central attribute of the Deity itself. In the face of our shared human frailty, forgiveness expresses a kind of transcendent and unconditional regard for the humanity of the other, free of any admixture of interest or punitive anger or puffed-up self-righteousness. Yet forgiveness rightly understood can never deny the reality of justice. To forgive, whether one forgives trespasses or debts, means abandoning the just claims we have against others, in the name of the higher ground of love. Forgiveness affirms justice even in the act of suspending it. It is rare because it is so costly.

In the new therapeutic dispensation, however, forgiveness is all about the forgiver, and his or her power and well-being. We have come a long way from Shakespeare’s Portia, who spoke so memorably in The Merchant of Venice about the unstrained “quality of mercy,” which “droppeth as the gentle rain from heaven” and blesses both “him that gives and him that takes.”5 And an even longer way from Christ’s anguished cry from the cross, “Forgive them, for they know not what they do.”6 And perhaps even further yet from the most basic sense of forgiveness, the canceling of a monetary debt or the pardoning of a criminal offense, in either case a very conscious suspension of the entirely rightful demands of justice.

We still claim to think well of forgiveness, but it has in fact very nearly lost its moral weight by having been translated into an act of random kindness whose chief value lies in the sense of personal release it gives us. “Forgiveness,” proclaimed the journalist Gregg Easterbrook writing at Beliefnet, “is good for your health.”7 Like the similar acts of confession or apology, and other transactions in the moral economy of sin and guilt, forgiveness is in danger of being debased into a kind of cheap grace, a waiving of standards entirely, standards without which such transactions have little or no moral significance. Forgiveness only makes sense in the presence of a robust conception of justice. Without that, it is in real danger of being reduced to something passive and automatic and flimsy—a sanctimonious way of saying that nothing really matters very much at all.

The Infinite Extensibility of Guilt

The therapeutic view of guilt seems to offer the guilt-ridden an avenue of escape from its power, by redefining guilt as the result of psychic forces that do not relate to anything morally consequential. But that has not turned out to be an entirely workable solution, since it is not so easy to banish guilt merely by denying its reality. There is another powerful factor at work too, one that might be called the infinite extensibility of guilt. This proceeds from a very different set of assumptions, and is a surprising byproduct of modernity’s proudest achievement: its ceaselessly expanding capacity to comprehend and control the physical world.

In a world in which the web of relationships between causes and effects yields increasingly to human understanding and manipulation, and in which human agency therefore becomes ever more powerful and effective, the range of our potential moral responsibility, and therefore of our potential guilt, also steadily expands. We like to speak, romantically, of the interconnectedness of all things, failing to recognize that this same principle means that there is almost nothing for which we cannot be, in some way, held responsible. This is one inevitable side effect of the growing movement to change the name of our geological epoch from the Holocene to the Anthropocene—the first era in the life of the planet to be defined by the effects of the human presence and human power: effects such as nuclear fallout, plastic pollution, domesticated animals, and anthropogenic climate change. Power entails responsibility, and responsibility leads to guilt.

I can see pictures of a starving child in a remote corner of the world on my television, and know for a fact that I could travel to that faraway place and relieve that child’s immediate suffering, if I cared to. I don’t do it, but I know I could. Although if I did so, I would be a well-meaning fool like Dickens’s ludicrous Mrs. Jellyby, who grossly neglects her own family and neighborhood in favor of the distant philanthropy of African missions. Either way, some measure of guilt would seem to be my inescapable lot, as an empowered man living in an interconnected world.

Whatever donation I make to a charitable organization, it can never be as much as I could have given. I can never diminish my carbon footprint enough, or give to the poor enough, or support medical research enough, or otherwise do the things that would render me morally blameless.

Colonialism, slavery, structural poverty, water pollution, deforestation—there’s an endless list of items for which you and I can take the rap. To be found blameless is a pipe dream, for the demands on an active conscience are literally as endless as an active imagination’s ability to conjure them. And as those of us who teach young people often have occasion to observe, it may be precisely the most morally perceptive and earnest individuals who have the weakest common-sense defenses against such overwhelming assaults on their over-receptive sensibilities. They cannot see a logical place to stop. Indeed, when any one of us reflects on the brute fact of our being alive and taking up space on this planet, consuming resources that could have met some other, more worthy need, we may be led to feel guilt about the very fact of our existence.

The questions involved are genuine and profound; they deserve to be asked. Those who struggle most deeply with issues of environmental justice and stewardship are often led to wonder whether there can be any way of life that might allow one to escape being implicated in the cycles of exploitation and cruelty and privilege that mark, ineluctably, our relationship with our environment. They suffer from a hypertrophied sense of guilt, and desperately seek some path to an existence free of it.

In this, they embody a tendency of the West as a whole, expressed in an only slightly exaggerated form. So excessive is this propensity toward guilt, particularly in the most highly developed nations of the Western world, that the French writer Pascal Bruckner, in a courageous and brilliant recent study called The Tyranny of Guilt (in French, the title is the slightly different La tyrannie de la pénitence), has identified the problem as “Western masochism.” The lingering presence of “the old notion of original sin, the ancient poison of damnation,” Bruckner argues, holds even secular philosophers and sociologists captive to its logic.8

For all its brilliance, though, Bruckner’s analysis is not fully adequate. The problem goes deeper than a mere question of alleged cultural masochism arising out of vestigial moral reflexes. It is, after all, not merely our pathologies that dispose us in this direction. The pathologies themselves have an anterior source in the very things that make us proudest: our knowledge of the world, of its causes and effects, and our consequent power to shape and alter those causes and effects. The problem is perfectly expressed in T.S. Eliot’s famous question “After such knowledge, what forgiveness?”9 In a world of relentlessly proliferating knowledge, there is no easy way of deciding how much guilt is enough, and how much is too much.

Stolen Suffering

Notwithstanding all claims about our living in a post-Christian world devoid of censorious public morality, we in fact live in a world that carries around an enormous and growing burden of guilt, and yearns—sometimes even demands—to be free of it. About this, Bruckner could not have been more right. And that burden is always looking for an opportunity to discharge itself. Indeed, it is impossible to exaggerate how many of the deeds of individual men and women can be traced back to the powerful and inextinguishable need of human beings to feel morally justified, to feel themselves to be “right with the world.” One would be right to expect that such a powerful need, nearly as powerful as the merely physical ones, would continue to find ways to manifest itself, even if it had to do so in odd and perverse ways.

Which brings me to a very curious story, full of significance for these matters. It comes from a New York Times op-ed column by Daniel Mendelsohn, published on March 9, 2008, and aptly titled “Stolen Suffering.”10 Mendelsohn, a Bard College professor who had written a book about his family’s experience of the Holocaust, told of hearing the story of an orphaned Jewish girl who trekked 2,000 miles from Belgium to Ukraine, surviving the Warsaw ghetto, murdering a German officer, and taking refuge in forests where she was protected by kindly wolves. The story had been given wide circulation in a 1997 book, Misha: A Mémoire of the Holocaust Years, and its veracity was generally accepted. But it was eventually discovered to be a complete fabrication, created by a Belgian Roman Catholic named Monique De Wael.11

Such a deception, Mendelsohn argued, is not an isolated event. It needs to be understood in the context of a growing number of “phony memoirs,” such as the notorious child-survivor Holocaust memoir Fragments, or Love and Consequences, the putative autobiography of a young mixed-race woman raised by a black foster mother in gang-infested Los Angeles.12 These books were, as Mendelsohn said, “a plagiarism of other people’s trauma,” written not, as their authors claimed, “by members of oppressed classes (the Jews during World War II, the impoverished African-Americans of Los Angeles today), but by members of relatively safe or privileged classes.” Interestingly, too, he noted that the authors seemed to have an unusual degree of identification with their subjects—indeed, a degree of identification approaching the pathological. Defending Misha, De Wael declared, astonishingly, that “the story is mine…not actually reality, but my reality, my way of surviving.”13

What these authors have appropriated is suffering, and the identification they pursue is an identification not with certifiable heroes but with certifiable victims. It is a particular and peculiar kind of identity theft. How do we account for it? What motivates it? Why would comfortable and privileged people want to identify with victims? And why would their efforts appeal to a substantial reading public?

Or, to pose the question even more generally, in a way that I think goes straight to the heart of our dilemma: How can one account for the rise of the extraordinary prestige of victims, as a category, in the contemporary world?

I believe that the explanation can be traced back to the extraordinary weight of guilt in our time, the pervasive need to find innocence through moral absolution and somehow discharge one’s moral burden, and the fact that the conventional means of finding that absolution—or even of keeping the range of one’s responsibility for one’s sins within some kind of reasonable boundaries—are no longer generally available. Making a claim to the status of certified victim, or identifying with victims, however, offers itself as a substitute means by which the moral burden of sin can be shifted, and one’s innocence affirmed. Recognition of this substitution may operate with particular strength in certain individuals, such as De Wael and her fellow hoaxing memoirists. But the strangeness of the phenomenon suggests a larger shift of sensibility, which represents a change in the moral economy of sin. And almost none of it has occurred consciously. It is not something as simple as hypocrisy that we are seeing. Instead, it is a story of people working out their salvation in fear and trembling.

The Moral Economy of Sin

In the modern West, the moral economy of sin remains strongly tied to the Judeo-Christian tradition, and the fundamental truth about sin in the Judeo-Christian tradition is that sin must be paid for or its burden otherwise discharged. It can neither be dissolved by divine fiat nor repressed nor borne forever. In the Jewish moral world in which Christianity originated, and without which it would have been unthinkable, sin had always had to be paid for, generally by the sacrificial shedding of blood; its effects could never be ignored or willed away. Which is precisely why, in the Christian context, forgiveness of sin was specifically related to Jesus Christ’s atoning sacrifice, his vicarious payment for all human sins, procured through his death on the cross and made available freely to all who embraced him in faith. Forgiveness has a stratospherically high standing in the Christian faith. But it is grounded in fundamental theological and metaphysical beliefs about the person and work of Christ, which in turn can be traced back to Jewish notions of sin and how one pays for it. It makes little sense without them. Forgiveness, or expiation, or atonement—all of these concepts promising freedom from the weight of guilt are grounded in a moral transaction, enacted within the universe of a moral economy of sin.

But in a society that retains its Judeo-Christian moral reflexes but has abandoned the corresponding metaphysics, how can the moral economy of sin continue to operate properly, and its transactions be effectual? Can a credible substitute means of discharging the weight of sin be found? One workable way to be at peace with oneself and feel innocent and “right with the world” is to identify oneself as a certifiable victim—or better yet, to identify oneself with victims. This is why the Mendelsohn story is so important and so profoundly indicative, even if it deals with an extreme case. It points to the way in which identification with victims, and the appropriation of victim status, has become an irresistible moral attraction. It suggests the real possibility that claiming victim status is the sole sure means left of absolving oneself and securing one’s sense of fundamental moral innocence. It explains the extraordinary moral prestige of victimhood in modern America and Western society in general.

Why should that be so? The answer is simple. With moral responsibility comes inevitable moral guilt, for reasons already explained. So if one wishes to be accounted innocent, one must find a way to make the claim that one cannot be held morally responsible. This is precisely what the status of victimhood accomplishes. When one is a certifiable victim, one is released from moral responsibility, since a victim is someone who is, by definition, not responsible for his condition, but can point to another who is responsible.

But victimhood at its most potent promises not only release from responsibility, but an ability to displace that responsibility onto others. As a victim, one can project onto another person, the victimizer or oppressor, any feelings of guilt he might harbor, and in projecting that guilt lift it from his own shoulders. The result is an astonishing reversal, in which the designated victimizer plays the role of the scapegoat, upon whose head the sin comes to rest, and who pays the price for it. By contrast, in appropriating the status of victim, or identifying oneself with victims, the victimized can experience a profound sense of moral release, of recovered innocence. It is no wonder that this has become so common a gambit in our time, so effectively does it deal with the problem of guilt—at least individually, and in the short run, though at the price of social pathologies in the larger society that will likely prove unsustainable.

Grievance—and Penitence—on a Global Scale

All of this confusion and disruption to our most time-honored ways of handling the dispensing of guilt and absolution creates enormous problems, especially in our public life, as we assess questions of social justice and group inequities, which are almost impossible to address without such morally charged categories coming into play. Just look at the incredible spectacle of today’s college campuses, saturated as they are with ever-more-fractured identity politics, featuring an ever-expanding array of ever-more-minute grievances, with accompanying rounds of moral accusation and declarations of victimhood. These phenomena are not merely a fad, and they did not come out of nowhere.

Similar categories also come into play powerfully when the issues in question are ones relating to matters such as the historical guilt of nations and their culpability or innocence in the international sphere. Such questions are ubiquitous, as never before.

In the words of political scientist Thomas U. Berger, “We live in an age of apology and recrimination,” and he could not be more right.14 Guilt is everywhere around us, and its potential sources have only just begun to be plumbed, as our understanding of the buried past widens and deepens.

Gone is the amoral Hobbesian notion that war between nations is merely an expression of the state of nature. The assignment of responsibility for causing a war, the designation of war guilt, the assessment of punishments and reparations, the identification and prosecution of war crimes, the compensation of victims, and so on—all of these are thought to be an essential part of settling a war’s effects justly, and are part and parcel of the moral economy of guilt as it now operates on the national and international levels.

The heightened moral awareness we now bring to international affairs is something new in human history, stemming from the growing social and political pluralism of Western democracies and the unprecedented influence of universalized norms of human rights and justice, supported and buttressed by a robust array of international institutions and nongovernmental organizations ranging from the International Criminal Court to Amnesty International.

In addition, the larger narratives through which a nation organizes and relates its history, and through which it constitutes its collective memory, are increasingly subject to monitoring and careful scrutiny by its constituent ethnic, linguistic, cultural, and other subgroups, and are responsive to demands that those histories reflect the nation’s past misdeeds and express contrition for them. Never has there been a keener and more widespread sense of particularized grievances at work throughout the world, and never have such grievances been able to count on receiving such a thorough and generally sympathetic hearing from scholars and the general public.

Indeed, it is not an exaggeration to say that one could not begin to understand the workings of world politics today without taking into account a whole range of morally charged questions of guilt and innocence. How can one fully understand the decision by Chancellor Angela Merkel to admit a million foreign migrants a year into Germany without first understanding how powerfully the burden of historical guilt weighs upon her and many other Germans? Such factors are now as much a part of historical causation and explanation as such standbys as climate, geography, access to natural resources, demographics, and socioeconomic organization.

There is no disputing the fact, then, that history itself, particularly in the form of “coming to terms with” the wrongs of the past and the search for historical justice, is becoming an ever more salient element in national and international politics. We see it in the concern over past abuses of indigenous peoples, colonized peoples, subordinated races and classes, and the like, and we see it in the ways that nations relate their stories of war. Far from being buried, the past has become ever more alive with moral contestation.

Perhaps the most impressive example of sustained collective penitence in human history has come from the government and people of Germany, who have done so much to atone for the sins of Nazism. But how much penitence is enough? And how long must penance be done? When can we say that the German people—who are, after all, an almost entirely different cast of characters from those who lived under the Nazis—are free and clear, and have “paid their debt” to the world and to the past, and are no longer under a cloud of suspicion? Who could possibly make that judgment? And will there come a day—indeed, has it already arrived, with the nation’s backlash against Chancellor Merkel’s immigration blunders?—when the Germans have had enough of the Sisyphean guilt which, as it may seem to them, they have been forced by other sinful nations to bear, and begin to seek their redemption by other means?

Who, after all, has ever been pure and wise enough to administer such postwar justice with impartiality and detachment, and impeccable moral credibility? What nation or entity at the close of World War II was sufficiently without sin to cast the decisive stone? The Nuremberg and Tokyo war crimes trials were landmarks in the establishment of institutional entities administering and enforcing international law. But they also were of questionable legality, reflecting the imposition of ad hoc, ex post facto laws, administered by victors whose own hands were far from entirely clean (consider the irony of Soviet judges sitting in judgment of the same kinds of crimes their own regime committed with impunity)—indeed, victors who might well have been made to stand trial themselves, had the tables been turned, and the subject at hand been the bombing of civilian targets in Hiroshima and Dresden.

Or consider whether the infamous Article 231 in the Treaty of Versailles, assigning “guilt” to Germany for the First World War, was not, in the very attempt to impose the victor’s just punishment on a defeated foe, itself an act of grave injustice, the indignity of which surely helped to precipitate the catastrophes that followed it. The assignment of guilt, especially exclusive guilt, to one party or another may satisfy the most urgent claims of justice, or the desire for retribution, but may fail utterly the needs of reconciliation and reconstruction. As Elazar Barkan bluntly argued in his book The Guilt of Nations, “In forcing an admission of war guilt at Versailles, rather than healing, the victors instigated resentment that contributed to the rise of Fascism.”15 The work of healing, like the work of the Red Cross, has a claim all its own, one that is not always compatible with the utmost pursuit of justice (although it probably cannot succeed in the complete absence of such a pursuit). Nor does such an effort to isolate and assign exclusive guilt meet the needs of a more capacious historical understanding, one that understands, as Herbert Butterfield once wrote, that history is “a clash of wills out of which there emerges something that no man ever willed.”16 And, he might have added, in which no party is entirely innocent.

So once again we find ourselves confronting the paradox of sin that cannot be adequately expiated. The deeply inscribed algorithm of sin demands some kind of atonement, but for some aspects of the past there is no imaginable way of making that transaction without creating new sins of equivalent or greater dimension. What possible atonement can there be for, say, the institution of slavery? It is no wonder that the issue of reparations for slavery surfaces periodically, and probably always will, yet it is simply beyond the power of the present or the future to atone for the sins of the past in any effective way. Those of us who teach history, and take seriously the moral formation of our students, have to consider what the takeaway from this is likely to be. Do we really want to rest easy with the idea that a proper moral education needs to involve a knowledge of our extensive individual and collective guilt—a guilt for which there is no imaginable atonement? That this is not a satisfactory state of affairs would seem obvious; what to do about it, particularly in a strictly secular context, is another matter.

Again, the question arises whether and to what extent all of this has something to do with our living in a world that has increasingly, for the past century or so, been run according to secular premises, using a secular vocabulary operating within an “immanent frame”—a mode of operation that requires us to be silent about, and forcibly repress, the very religious frameworks and vocabularies within which the dynamics of sin and guilt and atonement have hitherto been rendered intelligible. I use the term “repress” here with some irony, given its Freudian provenance. But even the irreligious Freud did not envision the “liberation” of the human race from its religious illusions as an automatic and sufficient solution to its problems. He saw nothing resembling a solution. Indeed, it could well be the case, and paradoxically so, that just at the moment when we have become more keenly aware than ever of the wages of sin in the world, and more keenly anxious to address those sins, we find ourselves least able to describe them in those now-forbidden terms, let alone find moral release from their weight. Andrew Delbanco puts it quite well in his perceptive and insightful 1995 book The Death of Satan:

We live in the most brutal century in human history, but instead of stepping forward to take the credit, the devil has rendered himself invisible. The very notion of evil seems to be incompatible with modern life, from which the ideas of transgression and the accountable self are fast receding. Yet despite the loss of old words and moral concepts—Satan, sin, evil—we cannot do without some conceptual means for thinking about the universal human experience of cruelty and pain…. If evil, with all its insidious complexity, escapes the reach of our imagination, it will have established dominion over us all.17

So there are always going to be consequences attendant upon the disappearance of such words, and they may be hard to foresee, and hard to address. “Whatever became of sin?” asked the psychiatrist Karl Menninger, in his 1973 book of that title. What, in the new arrangements, can accomplish the moral and transactional work that was formerly done by the now-discarded concepts? If, thanks to Nietzsche, the absence of belief in God is “the notional condition of modern Western culture,” as Paula Fredriksen argues in her study of the history of the concept of sin, doesn’t that mean that the idea of sin is finished too?18

Yes, it would seem to mean just that. After all, “sin” cannot be understood apart from a larger context of ideas. So what happens when all the ideas that upheld “sin” in its earlier sense have ceased to be normatively embraced? Could not the answer to Menninger’s question be something like Zarathustra’s famous cry: “Sin is dead and we have killed it!”?

Sin is a transgression against God, and without a God, how can there be such a thing as sin? So the theory would seem to dictate. But as Fredriksen argues, that theory fails miserably to explain the world we actually inhabit. Sin lives on, it seems, even if we decline to name it as such. We live, she says, in the web of culture, and “the biblical god…seems to have taken up permanent residence in Western imagination…[so much so that] even nonbelievers seem to know exactly who or what it is that they do not believe in.”19 In fact, given the anger that so many nonbelievers evince toward this nonexistent god, one might be tempted to speculate whether their unconscious cry is “Lord, I do not believe; please strengthen my belief in your nonexistence!” Such was Nietzsche’s genius in communicating how difficult an achievement a clean and unconditional atheism is, a conundrum that he captured not by asserting that God does not exist, but that God is dead. For the existence of the dead constitutes, for us, a presence as well as an absence. It is not so easy to wish that enduring presence away, particularly when there is the lingering sense that the presence was once something living and breathing.

What makes the situation dangerous for us, as Fredriksen observes, is not only the fact that we have lost the ability to make conscious use of the concept of sin but that we have also lost any semblance of a “coherent idea of redemption,”20 the idea that has always been required to accompany the concept of sin in the past and tame its harsh and punitive potential. The presence of vast amounts of unacknowledged sin in a culture, a culture full to the brim with its own hubristic sense of world-conquering power and agency but lacking any effectual means of achieving redemption for all the unacknowledged sin that accompanies such power: This is surely a moral crisis in the making—a kind of moral-transactional analogue to the debt crisis that threatens the world’s fiscal and monetary health. The rituals of scapegoating, of public humiliation and shaming, of multiplying morally impermissible utterances and sentiments and punishing them with disproportionate severity, are visibly on the increase in our public life. They are not merely signs of intolerance or incivility, but of a deeper moral disorder, an Unbehagen that cannot be willed away by the psychoanalytic trick of pretending that it does not exist.

The Persistence of Guilt

Where then does this analysis of our broken moral economy leave us? The progress of our scientific and technological knowledge in the West, and of the culture of mastery that has come along with it, has worked to displace the cultural centrality of Christianity and Judaism, the great historical religions of the West. But it has not been able to replace them. For all its achievements, modern science has left us with at least two overwhelmingly important, and seemingly insoluble, problems for the conduct of human life. First, modern science cannot instruct us in how to live, since it cannot provide us with the ordering ends according to which our human strivings should be oriented. In a word, it cannot tell us what we should live for, let alone what we should be willing to sacrifice for, or die for.

And second, science cannot do anything to relieve the guilt weighing down our souls, a weight to which it has added appreciably, precisely by rendering us able to be in control of, and therefore accountable for, more and more elements in our lives—responsibility being the fertile seedbed of guilt. That growing weight seeks opportunities for release, seeks transactional outlets, but finds no obvious or straightforward ones in the secular dispensation. Instead, more often than not we are left to flail about, seeking some semblance of absolution in an incoherent post-Christian moral economy that has not entirely abandoned the concept of sin but lacks the transactional power of absolution or expiation without which no moral system can be bearable.

What is to be done? One conclusion seems unavoidable. Those who have viewed the obliteration of religion, and particularly of Judeo-Christian metaphysics, as the modern age’s signal act of human liberation need to reconsider their dogmatic assurance on that point. Indeed, the persistent problem of guilt may open up an entirely different basis for reconsidering the enduring claims of religion. Perhaps human progress cannot be sustained without religion, or something like it, and specifically without something very like the moral economy of sin and absolution that has hitherto been secured by the religious traditions of the West.

Such an argument would have little to do with conventional theological apologetics. Instead, it would draw from empirical realities regarding the social and psychological makeup of advanced Western societies. And it would fully face the fact that, without the support of religious beliefs and institutions, one may have no choice but to accept the dismal prospect envisioned by Freud, in which the advance of human civilization brings not happiness but a mounting tide of unassuaged guilt, ever in search of novel and ineffective, and ultimately bizarre, ways to discharge itself. Such an advance would steadily diminish the human prospect, and render it less and less sustainable. It would smother the energies of innovation that have made the West what it is, and fatally undermine the spirited confidence needed to uphold the very possibility of progress itself. It must therefore be countered. But to be countered, it must first be understood.

Endnotes

  1. The discussion that follows is drawn from the second essay in Friedrich Nietzsche, On the Genealogy of Morality, ed. Keith Ansell-Pearson, trans. Carol Diethe (Cambridge, England: Cambridge University Press, 2006), 35–67. First published 1887. I here take note of the fact that any discussion of guilt per se runs the risk of conflating different meanings of the word: guilt as a forensic or objective term, guilt as culpability, is not the same thing as guilt as a subjective or emotional term. It is the difference between being guilty and feeling guilty, a difference that is analytically clear, but often difficult to sustain in discussions of particular instances.
  2. Ibid., 61–62.
  3. Sigmund Freud, Civilization and Its Discontents, trans. James Strachey (New York, NY: Norton, 2005), 137, 140. First published 1930.
  4. Ibid., 140.
  5. William Shakespeare, The Merchant of Venice, Act 4, Scene 1, lines 184–205; see, e.g., Stanley Wells and Gary Taylor, eds., The Oxford Shakespeare: The Complete Works, second edition (Oxford, England: Oxford University Press, 2005), 473.
  6. Luke 23:34 (Revised Standard Version).
  7. Gregg Easterbrook, “Forgiveness is Good for Your Health,” Beliefnet, n.d., http://www.beliefnet.com/wellness/health/2002/03/forgiveness-is-good-for-your-health.aspx. Accessed 5 January 2017.
  8. Pascal Bruckner, The Tyranny of Guilt: An Essay on Western Masochism, trans. Steven Rendall (Princeton, NJ: Princeton University Press, 2010), 1–4.
  9. T.S. Eliot, “Gerontion,” line 34, in The Complete Poems and Plays: 1909–1950 (Orlando, FL: Harcourt Brace Jovanovich, 1971), 22. The poem was first published in 1920.
  10. Daniel Mendelsohn, “Stolen Suffering,” New York Times, March 9, 2008, WK12, http://www.nytimes.com/2008/03/09/opinion/09mendelsohn.html?_r=0.
  11. The book was Misha: A Mémoire of the Holocaust Years (Boston, MA: Mount Ivy Press, 1997), and the author published it under the name Misha Defonseca. According to the Belgian newspaper Le Soir, De Wael was the daughter of parents who had collaborated with the Nazis: see David Mehegan, “Misha and the Wolves,” Off the Shelf (blog), Boston Globe, March 3, 2008, http://www.boston.com/ae/books/blog/2008/03/misha_and_the_w.html.
  12. Binjamin Wilkomirski, Fragments: Memories of a Wartime Childhood (New York, NY: Schocken, 1997); Margaret B. Jones, Love and Consequences: A Memoir of Hope and Survival (New York, NY: Riverhead, 2008).
  13. In a final twist of the case, in May 2014 the Massachusetts Court of Appeals ruled that De Wael had to forfeit the $22.5 million in royalties she had received for Misha. Quotation from Lizzie Dearden, “Misha Defonseca: Author Who Made Up Holocaust Memoir Ordered to Repay £13.3m,” The Independent, May 12, 2014, http://www.independent.co.uk/arts-entertainment/books/news/author-who-made-up-bestselling-holocaust-memoir-ordered-to-repay-133m-9353897.html; additional details from Jeff D. Gorman, “Bizarre Holocaust Lies Support Publisher’s Win,” Courthouse News Service, May 8, 2014, http://www.courthousenews.com/2014/05/08/67710.htm.
  14. Thomas U. Berger, War, Guilt, and World Politics after World War II (New York, NY: Cambridge University Press, 2012), 8.
  15. Elazar Barkan, The Guilt of Nations: Restitution and Negotiating Historical Injustices (Baltimore, MD: Johns Hopkins University Press, 2000), xxxiii.
  16. Herbert Butterfield, The Whig Interpretation of History (New York, NY: Norton, 1965), 45–47.
  17. Andrew Delbanco, The Death of Satan: How Americans Have Lost the Sense of Evil (New York, NY: Farrar, Straus and Giroux, 1995), 9.
  18. Paula Fredriksen, Sin: The Early History of an Idea (Princeton, NJ: Princeton University Press, 2012), 149.
  19. Ibid.
  20. Ibid., 150.

Wilfred M. McClay is G.T. and Libby Blankenship Chair in the History of Liberty and director of the Center for the History of Liberty at the University of Oklahoma.

Reprinted from The Hedgehog Review 19.1 (Spring 2017). This essay may not be resold, reprinted, or redistributed for compensation of any kind without prior written permission. Please contact The Hedgehog Review for further details.

Thomas Ligotti: The Red Tower

Perhaps it seems that I have said too much about the Red Tower, and perhaps it has sounded far too strange. Do not think that I am unaware of such things. But as I have noted throughout this document, I am only repeating what I have heard. I myself have never seen the Red Tower—no one ever has, and possibly no one ever will.

—Thomas Ligotti, The Nightmare Factory

Thomas Ligotti does not see the world as you and I do. He does not see the world at all. Rather he envisions another, separate realm of description, one that sits in the interstices of the visible and invisible, a twilight zone of shifting semblances, echoes of our world. Each of his stories is neither a window onto that realm nor a mirror of its dark recesses, but rather a promise of nightmares that travel among us like revenants seeking a habitation. Reading his stories does not awaken us to the truth of this mad world so much as it shapes our psyches toward the malformed madness that surrounds us always. For we inhabit the secure regions of a fake world, a collective hallucination of the universal decay, not knowing or wishing to know the truth in which we live and have our being.

The security filters that wipe out the traces of the real world are lacking in Ligotti. The system of tried and tested traps that keep us safely out of the nightmare lands never took hold of Ligotti’s keen mind. Rather he inhabits a hedge world, a fence between the realms of the noumenal and phenomenal, appearance and reality. But it is not a dual world. There is no separate realm beyond this one, only the “mind-forg’d manacles,” as William Blake called them, of the self-imposed collective security regimes we call the human realm. Only the filters of language, culture, and civilization protect us from the dark truth of the universe in all its nightmare glory.

Speaking of the dark marvels of our blank universe of entropic decay, of the endless sea of blackness surrounding those small pools of light in the starry firmament, Ligotti contemplates creation:

Dreaming upon the grayish desolation of that landscape, I also find it quite easy to imagine that there might have occurred a lapse in the monumental tedium, a spontaneous and inexplicable impulse to deviate from a dreary perfection, perhaps even an unconquerable desire to risk a move toward a tempting defectiveness.

For Ligotti the universe is not so much a place where gods or God, demons or devils vie for the souls of humans as a realm of impersonal forces that have neither will nor intelligence. A realm of malevolence only in the sense that it cares not one iota for its progeny, for its endless experiments, its defective and deviant children. It only knows movement and change, process and the swerve away from perfection. This is our universe, as Wallace Stevens once said so eloquently in “The Poems of Our Climate”:

There would still remain the never-resting mind,
So that one would want to escape, come back
To what had been so long composed.
The imperfect is our paradise.
Note that, in this bitterness, delight,
Since the imperfect is so hot in us,
Lies in flawed words and stubborn sounds.

Between entropic decay and negentropic creativity we move in a dark vitality of organic and inorganic motion, our minds blessed or cursed with awareness. And yet most of us are happily forgetful of our state of being and becoming, unaware of the murderous perfection against which our flawed lives labour. We are blessed with forgetfulness and sleep, oblivious of the machinery of creation that seeks our total annihilation. For life is a rift in the calm perfection of eternity, a rupture in the quietude that is the endless sea of nothingness. We are the enemies of this dead realm of endless night and universal decay. With us began an awareness of the mindless operations of a negentropic process, a movement to tilt the balance of universal apathy. We are the children of a corrupt thought, an imperfect and flawed creation that should not have been. And all the forces of perfection have been set loose to entrap us and bring the ancient curse to an end.

Speaking of this, Ligotti reminds us that

An attempt was made to reclaim the Red Tower, or at least to draw it back toward the formless origins of its being. I am referring, of course, to that show of force which resulted in the evaporation of the factory’s dense arsenal of machinery. Each of the three stories of the Red Tower had been cleaned out, purged of its offending means of manufacturing novelty items, and the part of the factory that rose above the ground was left to fall into ruins.

Yes, we are an afterthought, a mere copy of a copy, experimental actors in a universal factory that has gone through many editions before us, fought many wars, made many worlds. Many universes of manufactured realities have come before ours. We are not special in this regard, but are instead the next in a long line of novelty products of a process that is mindless in intent, yet long in its devious and malevolent course toward imperfection. Or as Ligotti puts it:

Dreaming upon the grayish desolation of that landscape, I also find it quite easy to imagine that there might have occurred a lapse in the monumental tedium, a spontaneous and inexplicable impulse to deviate from a dreary perfection, perhaps even an unconquerable desire to risk a move toward a tempting defectiveness. As a concession to this impulse or desire out of nowhere, as a minimal surrender, a creation took place and a structure took form where there had been nothing of its kind before. I picture it, at its inception, as a barely discernible irruption in the landscape, a mere sketch of an edifice, possibly translucent when making its first appearance, a gray density rising in the grayness, embossed upon it in a most tasteful and harmonious design. But such structures or creations have their own desires, their own destinies to fulfil, their own mysteries and mechanisms which they must follow at whatever risk.

Our world is that deviation, that experimental factory in a gray sea of desolation, a site where novelties of a “hyper-organic” variety are endlessly produced with a desire of their own. Describing the nightmare of organicity, Ligotti offers us a picture of the machinic system of our planetary life:

On the one hand, they manifested an intense vitality in all aspects of their form and function; on the other hand, and simultaneously, they manifested an ineluctable element of decay in these same areas. That is to say that each of these hyper-organisms, even as they scintillated with an obscene degree of vital impulses, also, and at the same time, had degeneracy and death written deeply upon them. In accord with a tradition of dumbstruck insanity, it seems the less said about these offspring of the birthing graves, or any similar creations, the better. I myself have been almost entirely restricted to a state of seething speculation concerning the luscious particularities of all hyper-organic phenomena produced in the subterranean graveyard of the Red Tower.

We know nothing of the teller of the tales, only that everything he describes is at second hand, a mere reflection of a reflection, a regurgitated fragment from the demented crew of the factory who have all gone insane: “I am only repeating what I have heard. I myself have never seen the Red Tower—no one ever has, and possibly no one ever will.”

Bound to our illusions, safely tucked away in the collective madness of our “human security regimes” (Nick Land), we catch only glimpses of the blood-soaked towers of the factory of the universal decay surrounding us. Ligotti, unlike us, lives in this place of no place, burdened with the truth, with the sight of the universe as it is, unblinkered by the rose-tinted glasses of our cultural machinery. Ligotti sees into things, and what he’s discovered is the malevolence of an endless imperfection that is gnawing away at the perfection of nothingness. Ligotti admits he has no access to the machinery of the world, only its dire reflection and echo in others who have gone insane within its enclosed factory and assemblage. Echoing the mad echoes of the insane, he repeats the gestures of the unknown and unknowable in the language of a decaying empire of mind. To read Ligotti is to sift through the cinders of a decaying and dying earth, to listen to the morbidity of our birthing pains, to view “the gray and featureless landscapes” of our mundane lives as we spend our days in mindless oblivion of the dark worlds that encompass us.

Broken in mind and body, caught in the mesh of a world in decay and imperfection, Ligotti sends us messages from the asylums of solitude, a figure in the dark of our times, an outrider from the hells of our impersonal and indifferent chaosmos. His eyes gaze upon that which is both the ill-famed night and the daily terror of his short life. He gifts us with his nightmares, and suffers for us the cold extremity of those stellar regions of the soul we dare not enter. Bound to the wheel of horror, he discovers the tenuous threads that provide us guideposts and liminal puzzles from the emptiness of which we are made. In an essay on Heidegger, Nick Land once remarked that “Return, which is perhaps the crucial thought of modernity, must now be read elsewhere. The dissolution of humanism is stripped even of the terminology which veils collapse in the mask of theoretical mastery. It must be hazarded to poetry.”1 In Ligotti the hazard is the poetry of the mind facing the contours of a universe of corruption that is in itself beautiful, as the cold moon glowing across the blue-inflamed eyes of a stranger, her gaze alight with the sun’s dying embers and the shifting afterglow of the moon’s bone smile.

Or, as Ligotti’s interlocutor says in summation:

I must keep still and listen for them; I must keep quiet for a terrifying moment. Then I will hear the sounds of the factory starting up its operations once more. Then I will be able to speak again of the Red Tower.

Listen for the machinery of creation to start up again, to hear the marshalling of new universes arising out of the void; for the blinding light of annihilation that will keep step with the logic of purification and transcendence that has trapped us in this dark cave of mind till language, man, and creation are folded back into that immanent world from which they sprang. Then we, too, might begin speaking the words that will produce in us that which is more than ourselves.


  1. Nick Land, Fanged Noumena: Collected Writings 1987–2007 (Urbanomic/Sequence Press), Kindle edition, locations 1158–1159.