Back in the time when I was required to read Orwell in school, we did it from the moral heights of a soon-to-be Cold War victor. In 1984, America still considered itself good and right and the enemy an “evil empire.” While we couldn’t know what would happen within the next five years or so, I think we could sense our Cold War victory coming to fruition. We also processed Orwell’s imagined dystopia through our “we share the same biology, regardless of ideology” (Sting) worldview. We all knew that the Soviets weren’t literally forced to believe that “two and two made five,” but we didn’t really understand what that allegory actually applied to.
It is in today’s America that I can see examples of Orwell’s absurdities brought to life. For most who read Orwell, I suspect part of the disconnect is the gap between his context and our own. First off, he modeled 1984 more on the Nazis than the Soviets (although Animal Farm, also required reading, allegorizes the latter). He wrote from a time when the sins of Hitler’s Germany were not only fresh in everyone’s minds, but based on contemporary knowledge of actual Nazi atrocities. Certainly, the general public was not privy to the inner workings of the German war machine. However, unlike today, the fear of Nazis was based on the experience of a war just fought and far less on a projection of modern fears and prejudice. Similarly, in using the concept of “2 + 2 = 4” as a stand-in for “obvious and irrefutable truth,” he could likely rely on the education of his readers to understand the historical power behind that statement. That equation, as a stand-in for natural law, is nearly as old as the Church of England herself. Orwell and his readers might also have been aware of Descartes’ use of it in his 1641 Meditationes de Prima Philosophia, where he explores the relationship between self-evidence, truth, and reality. Are they indeed all equivalent? How can we know?
In any case, today’s society lacks both the memory of a World War and that generation’s classical education to help us understand Orwell’s warnings from Orwell’s own perspective.
As much as Orwell seems to have predicted some of what we are going through now, we haven’t arrived here in a way he would recognize. For me, it takes the perspective of those who’ve lived through Soviet occupation to help make sense of the purpose of (seemingly) counter-productive social engineering. Soviet (and presumably Nazi) dogma included a combination of actual facts, politically necessary falsehoods, and outright absurdities. Voltaire (as Orwell’s education would have taught him) explained why the last is necessary. Forcing you to accept that which is obviously not true also helps you swallow those less obviously false, but politically necessary, falsehoods upon which the system relies. Voltaire also explains how, by getting into your mind and adjusting what you think, a totalitarian can also control what you would do. Ultimately you will cease to even question doing things that would otherwise be unthinkable.
Someone who grew up in a conquered Soviet territory gave me further insight. Under such a system, you are constantly required to remember what you’re allowed to say and what you’re not. The fact that you can’t actually reason out right from wrong or good from bad means you do have to carefully remember. The more ridiculous, absurd, and even self-evidently false the “fact” you must recite, the better. With such requirements, logical thinking becomes counter-productive. A liability. It is actually dangerous to think critically about what you’re told when that risks your arriving at a thought whereby you might, unknowingly, contradict Party-prescribed truth. The result is a docile society that must actively avoid questioning anything – just in case it can get one in trouble.
Seeing Stalin’s rules reflected in today’s America should frighten everyone. Probably not everyone sees it. Maybe even history will prove that I am overreacting; that society can take this in stride and resolve it as we’ve always resolved our political questions. However, that’s not what I wanted to write about today.
Back in the 1980s, I had a college roommate who was into the Weekly World News. We didn’t have money to throw around, but he could still occasionally talk us into picking up a copy of this tabloid of tabloids. This printed newspaper* was filled with crazy stories about Elvis being seen alive, evidence of psychic powers, UFO sightings, and the like. The articles were written in complete earnestness, no matter how crazy the topic.
Having developed a taste for the Weekly World News, I began to take an interest in its details. Many of the stories contained therein were not made up out of whole cloth, but sourced from other news outlets. One of those sources was (apparently) legitimate newspapers from Russia. Starting in the late 80s, with Gorbachev’s Glasnost, the Soviet official news agency began publishing reports of UFO sightings. The most famous is referred to as the “Voronezh UFO incident,” where, supposedly, a group of kids not only saw aliens but actually witnessed the abduction of a teenager. This was not the first UFO story to be reported as fact by TASS (the acronym, in Russian, for Telegraph Agency of the Soviet Union) and I’m not sure it was the last. The Soviet Union’s days, however, were numbered.
In the early 1990s, after the Soviet breakup, fanciful stories detailing all kinds of supernatural activity were reported throughout Russia and made their way to Americans via the Weekly World News. It wasn’t just UFOs. All manner of “news of the weird” seemed to be readily available to Russian readers.
This comes to mind today as I read, in this past weekend’s Wall St. Journal, an opinion piece about UFOs and the military (In the Mood for UFOs?). Columnist Holman W. Jenkins, Jr. sees the currently-popular story (which I’ll admit I have not been following) as one more example of the decline of professional journalism in America. Is this sufficient to explain what we are seeing?
This may all be looking for meaning where none is to be found. The instinct that leads me to correlate Russian newspaper articles with Orwellian thought control is the same that leads people to believe in UFOs in the first place. Our minds are designed to create patterns out of what we see. Particularly when the world is throwing scary, confusing, and (often) contradictory information at us, we want to believe that it all fits into some grand scheme. Whether that scheme be benign or sinister, in either we find comfort.
An association of UFO reporting with Soviet psychological manipulation is hard to justify. Most of the Russian stories surfaced only post-Soviet Union. The rest seemed to appear only in the last year or two before the fall of the Wall. The Soviet authorities themselves had little patience for the supernatural. Belief was to be grounded in the material. The idea of visitors coming from other worlds sounds a little too religious to comport with the ideals of Communism.
The horror of 2020 feels like it should have some greater meaning**, even though it is far more likely that it does not.
*The printed version ended its 28-year run in 2007. It has been reborn as a website, but I’d think that as a virtual entity it would simply become lost among all the crazy, conspiracy-laden sites that pepper the internet.
**So here’s something that begs for assignment of greater meaning: no sooner did I complete my first draft of this article than I started seeing Twitter posts defending assertions that 2 + 2 = 5.
The term Orwellian gets tossed around quite a bit. As I recollect, it has always been thus. I’ve also always found it to be a bit hyperbolic. Could the Soviet Union, even at its worst, have been as bad as Orwell’s 1984? Does North Korea truly match that today? Does it really make sense to apply Orwell’s template to England or to America because we see some similarities?
Even the worst tyrannical regime on the earth today has subtleties and variations that make it difficult to view it in absolute black and white. This is part of what makes literature, in general, and George Orwell’s dystopian writings, in particular, so important. This distinction became clear to me this past weekend. Perhaps you saw what I saw. The editorial staff of the magazine Nature penned a mea culpa for engaging in malquoted oldthink. The lead paragraph explained Nature‘s unfortunate situation,
When the World Health Organization (WHO) announced in February that the disease caused by the new coronavirus would be called COVID‑19, the name was quickly adopted by organizations involved in communicating public-health information. As well as naming the illness, the WHO was implicitly sending a reminder to those who had erroneously been associating the virus with Wuhan and with China in their news coverage— including Nature. That we did so was an error on our part, for which we take responsibility and apologize.
If we look at that paragraph without context, is it so bad? Not at first glance. The editorial goes on to explain that in oldspeak it was common practice “for viral diseases to be associated with the landscapes, places or regions where the first outbreaks occurred.” They give a couple of examples that many adults would be familiar with. “Middle East respiratory syndrome,” for example. The “Zika Virus” was named after a forest in Uganda. Self-referencing a bit, “Ebola” is the name of a river near the village where one of the first identified outbreaks occurred in 1976. The river was specifically chosen to destigmatize the name of the village where the disease occurred and to avoid the name of the individual, the village school’s headmaster, who first contracted and subsequently died of the disease. Nature explains that in 2015, the WHO issued guidelines whereby newly-identified* diseases must no longer use, among other things, geographical identifiers in their official names.
So far, there is some logic to this – but we can’t look at it in a vacuum.
As the above-quoted editorial suggested, some “news coverage” had been “associating the virus with Wuhan and China.” Not surprising, because the initial human-to-human transmission occurred in China (definitely) and almost certainly in Wuhan. Newsworthy? One would think. However, we are told, doing so was “erroneous.” Further down we come to understand that Nature is very much concerned about the politics. “As countries struggle”, we learn, “a minority of politicians are sticking with the outdated script.” Oh, the horror. And who are these doubleplusungood politicos who would shit all over the suffering of the most miserable nations of the world? So that nobody is unclear, a trio of ungood politicians is named.
US President Donald Trump has repeatedly associated the virus with China. Brazilian lawmaker Eduardo Bolsonaro — the son of President Jair Bolsonaro — has called it “China’s fault”. Politicians elsewhere, including in the United Kingdom, are also saying that China bears responsibility.
Now we have named our enemy and we can take up the sword of righteousness.
If you read the “conservative” press, you’ll find a particular angle from which they raise their objections. You see, part of what’s wrong with this implicit defense of China from politics is that China, itself, is also politicizing the pandemic. The unpersons whom we have not named in Britain are correct – China does actually bear some responsibility here. Perhaps some responsibility for the original transfer (see footnote), but definitely some responsibility for the suppression of information and outright misinformation as the virus began to spread within and then beyond China. In other words, they see in this apology active support for the Chinese propaganda campaign that is currently in progress. However, this is an argument that isn’t going to resonate too far outside of the pro-Trump crowd.
To the authors, yes it is about Trump, just not in that way. In not following the WHO guidelines to the letter and beyond, they inadvertently gave aid and comfort to the great orange devil. This is the sin for which they were required to prostrate themselves on the altar of public opinion. It’s unlikely that simple malquoting would require such a penance.
But back to my original point. Let’s forget the politics and stick with the literature. When Orwell created his dark future, it was one that was unequivocally bad. Nobody, but nobody, could “yes but” over the loss of freedom of conscience and the suppression of human nature. Furthermore, his book quickly became a standard within modern, liberal democracies. Each of us among the enlightened citizenry had read about and, presumably, could identify his indicators of totalitarianism and the ensuing destruction of humanity.
OK, maybe not each of us. We can’t expect everyone in America, or England, to have read and understood every canonical novel**, not to mention their implications. However, one would expect the editors of a major magazine to be particularly literate, particularly well-read. Is it really possible that someone whose living is earned by writing could say “I’m sorry that I ever erroneously associated this virus with Wuhan and China when I wrote that it originated – uh, somewhere, I forget where” without remembering their reading of Orwell?
I’m sorry we ever said we were allied with Eastasia. Oceania had always been at war with Eastasia, and stating otherwise was wrong. That we did so was an error on our part, for which we take responsibility and apologize.
*Interestingly, they also warn against unduly frightening language when indicating that a disease is, in fact, new. “Unknown” would be forbidden whereas the more academic-sounding “novel”, one supposes, is quite alright.
I was reading an article about the cultural tastes of the older generations. At some point our taste (in music, in the specific article) calcifies and we prefer to listen only to the sounds that we enjoyed in our youth. This being so, most of us have a moment in time that we see as defining. We see the events of this moment as shaping not only our own lives but the culture in which we live. While the feeling is universal, the actual moment will vary, being different for each of us. I believe the article called out the late teens or early twenties as the age at which we develop this nostalgia.
At the risk of dating myself, I find the years 1984 and 1985 to be particularly defining. First off, it helps that George Orwell not only placed his dystopian novel in the year 1984, but titled it the same. That left all of us in the first few years of the decade talking seriously about the arrival of this dystopian future and the extent to which it had already come about. The year saw a remade version of the film (same name) come out while the novel was on the required reading list for many a school.
I’ve read the book before, but it was a long time ago. My perspective has likely drifted since that reading and I would probably need another go-through just to remember how the novel relates to and differs from the film experience. The novel’s copyright expires January 1, 2021, less than a year away and counting down. Maybe I should read it again once the content, itself, is free from government control.
Up until I started watching, I was sure that I had also already seen the 1984 version of the film. After having watched, I’m pretty sure I haven’t actually seen it. I probably intended to when the film was released and somehow that aspiration turned into a vague recollection that I already had.
The book is a product of the desolation and destruction left behind by the Second World War. Orwell did not believe that free, Western Democracy would survive the war or, once the war ended, the ensuing Cold War. The book specifically reacted to the Tehran Conference and the notion that Allied victory would see the post-war world divided into zones of control. While partially correct (the world would see multiple wars fought across the post-war demarcations, such as in Korea and Vietnam), the full-scale war between superpowers never took place. One might also wonder how much Orwell’s own struggle with disease (tuberculosis) helped create the tone of utter despair within his novel.
Volumes have been written about the book, its context, its predictive ability or lack thereof, and its implications for the present society, whether that be present-day 1984 or present-day 2020. I’ll mostly stay away from that bigger picture so as to focus on this movie version.
At roughly this same time, 1982 had seen the release of Blade Runner and its imagery and style were redefining what the future might look like from 1984’s present. Oddly enough, that future is now the past. At the time, though, it seemed entirely plausible that we’d be living in an over-populated, corporate-controlled hell where our western culture had been almost entirely swallowed by the dominance of all things Japanese. Conflict with the Soviet Union was anticipated to follow paths as in Red Dawn or The Hunt for Red October (both 1984).
By contrast, the imagery of Nineteen Eighty-Four (the film), is the standard “war-torn Europe” pastiche. Was that a plausible future to us in 1984? Yes, if for no other reason than it was common within this exact context. Compare, for example, with the imagery of Pink Floyd – The Wall, also a 1982 film. It all fits the then-current archetype of World War III, conventional war version.
What about today, in the eyes of the “average” entertainment consumer? No world where progress was brought to a near-halt by the Second World War can possibly feel “futuristic” today. Nor can a story where Soviet and/or Nazi totalitarianism simply became ubiquitous. Even the imagery is all wrong for “perpetual war,” a state that we should feel quite familiar with. The fact is, the vision of a “war-torn Europe” has passed almost entirely out of human memory. There are very few still living who experienced the Second World War as an adult. Even the number that were old enough to have coherent memories of that time is rapidly dwindling. Thirty-six years on, the film has become a historical piece – depicting something from the 1940s – rather than the futuristic warning that it was intended to be.
The other bit of imagery from the film that surprised me a bit was the quantity and character of the sexual content. Part of it was that I probably wasn’t quite ready to “get” the book when I first read it and so didn’t properly digest the content. Part of it is that any language from a 1940s novel is not going to produce quite the same impact as an actual big-screen beaver shot.
The sexual imagery in this film dominates the screen. Furthermore, it is a kind of sexual imagery that one is not going to see in the present day. It is explicit without being “sexy.” Like the shattered buildings and unhealthy-looking populace, it adds to the sense of despair rather than distracting from it. Similar scenes in a modern film would show a lot more writhing and humping but nowhere near the anatomical accuracy. There is a new Victorian prudery at play that has redefined what is acceptable even within what is meant to push the boundaries of our sensibility.
The sexual focus is interesting in another way. As I said, I never thought of the story as primarily about sex. When I read the book, this was a facet of Winston’s “rebellion” but I wouldn’t have seen it as dominant. In the film, their revolution is primarily a sexual one. Julia (although this probably isn’t as clear in the movie as in the book) has no aspirations to revolution beyond forbidden acts of sexuality and sensuality. In Orwell’s words, she is a “rebel from the waist downwards.” Winston challenges the government’s new world order at a more intellectual and philosophical level but, as he acknowledges that he will eventually be caught and reprogrammed, his main goal is to retain his love for Julia no matter what is done to him. The sexual focus in 1984 probably relates to the experience those behind the movie had of the sexual revolution of the 1960s and 1970s. I see a very different context today.
I mentioned what I termed a new Victorian prudery. Sexuality, perhaps mostly male sexuality, is on many fronts being deterred. The massive depopulation of the Second World War and the even greater depopulation imagined by an Orwellian future aligned the needs of the State with the values of the religious right*: the need to build cultural and military strength with a vibrant and growing population. Thus the Nineteen Eighty-Four State’s suppression of sexuality, to the 80s eye, would be seen as clearly anti-progressive. In actuality, totalitarian governments, in the years leading up to 1984, were more likely to institute population limitations. Today, this has become an accepted part of what’s wrong with sex – it’s bad precisely because it tends to make babies.
In other words, among the Orwellian trends we are currently subject to, one seems to be the suppression of human sexuality – albeit in a very different direction than what Orwell imagined. This may be more profound in today’s world than Orwell’s insight into the neuro-linguistic programming effects of political correctness. Winston is defeated when his eternal feelings for Julia are taken from him. Mankind is defeated, not when we cease to question the powers that be, but when we can no longer express our natural and inherent nature. Politics truly conquers all when politics can overcome genetics.
One final thing that surprised me when watching was the opening-credit nod to the Eurythmics. I didn’t know they did the soundtrack for this film and I certainly didn’t know it was controversial. At the time, the Eurythmics were peaking commercially with their 1983 release of Sweet Dreams (Are Made of This). Their single Sexcrime (Nineteen Eighty-Four) was not quite as successful in the U.S. as it was in the UK, but it was popular in dance clubs. I never associated the song with the movie. Indeed, why would I? While I now connect Annie Lennox with movie soundtracks (courtesy of The Lord of the Rings), this would have seemed an oddity in the 80s. Furthermore, the Eurythmics’ electro-pop sound does not fit at all with the dystopian past/future of Nineteen Eighty-Four. This was a significant problem for the director. While the film’s production company, Virgin Studios, had commissioned the Eurythmics to do the soundtrack, director Michael Radford felt that the dance-pop sound did not fit into his artistic vision. He had scored the movie, both with songs and musical background, with the compositions of Dominic Muldowney. Late in production, Virgin inserted Eurythmics recordings in place of the orchestral score. Virgin also released a Eurythmics album, which they sold as the soundtrack to the movie. Muldowney’s songs still remain prominent in the film, particularly his nationalistic anthems for Oceania. Radford and Muldowney were very unhappy with the decision and said as much.
In the intervening years, rights have shifted and the past has been rewritten. In 1999, a new soundtrack CD was released. It was titled Nineteen Eighty-Four: The Music of Oceania, and contained Muldowney’s score only. At the same time, the film was re-edited to restore Muldowney’s “cues” where they had been replaced with Eurythmics electronica. Currently, DVD versions might contain both edits** of the film. I’m not entirely clear about which version I watched. I think I probably saw the Eurythmics version, in that some of the mood music struck me as a little weird. I’ve read that the streaming version uses the Virgin cut, but Amazon’s interface does not readily provide those details. I guess the lesson is that whichever version I saw is the right version and it always has been the right version.
Now, when it came to music, rock and pop, this mainstreaming of the newest sounds sometimes seemed like a wild exception rather than the rule. The early 1980s was heavily dominated by the artists of the previous decades. Look at the chart-toppers of 1984 and 1985 and you’ll see (if not the intact bands, at least the members of) The Beatles, Yes, Kenny Loggins, Genesis, The Commodores, Stevie Wonder, Ike & Tina Turner, King Crimson, REO Speedwagon, Dire Straits, and Jefferson Airplane. One would be forgiven for thinking it was still the early 1970s. At the same time, many of the bands that I would think of as “the future” of rock were either releasing their first albums or coming into their own commercially. In fact, looking at the musicscape of 1984 and 1985, I see several iterations of musical future as my own taste evolved.
Having so enjoyed this whole timeline-generating process, and having been pleased with the music-and-events format I indulged in for 1968, I decided to make one for 1984. In doing so, I realized that the events I so strongly associate with this time are actually spread (mostly) across 1984 and 1985, so my musical timeline covers a two-year span.
One more thought and a caution to readers.
As I loaded up my timeline with so many music videos, I realized I was pushing against some limits, at least in terms of the way my timelines interact with Firefox. The way the Knight Lab software works, it loads the multimedia into browser memory in order to display it. With dozens upon dozens of music videos involved, computer resources quickly become overloaded. This seems to persist even after closing the tab containing the timeline – one needs to shut down Firefox altogether, an action that itself becomes difficult due to the huge memory usage.
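If I were building the page myself, the obvious workaround would be to lazy-load the embeds rather than letting them all load at once. Here is a minimal sketch of the idea in TypeScript – to be clear, this is not the TimelineJS API; the placeholder markup and data-src convention are my own hypothetical setup – using the browser’s IntersectionObserver to swap in each iframe only as it scrolls near the viewport:

```typescript
// Hypothetical sketch: defer heavy media embeds until they near the viewport.
// Assumed markup: <div class="embed" data-src="https://www.youtube.com/embed/...">
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const placeholder = entry.target as HTMLElement;
    const iframe = document.createElement("iframe");
    iframe.src = placeholder.dataset.src ?? ""; // real embed URL lives in data-src
    iframe.width = "560";
    iframe.height = "315";
    obs.unobserve(placeholder);   // each placeholder is upgraded exactly once
    placeholder.replaceWith(iframe);
  }
}, { rootMargin: "200px" });      // start loading a little before it is visible

document
  .querySelectorAll<HTMLElement>(".embed[data-src]")
  .forEach((el) => observer.observe(el));
```

Whether something like that could be bolted onto a generated timeline without hacking the library itself, I can’t say; it’s just the shape of the fix.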
*I mean neither “religious” nor “right” in the American sense. The leftist wave of the sexual revolution attacked a prudery that, as in Victorian times, was closely tied with Christianity. In the modern world, the religious can fall on either side of the political divide (fundamentalist Islam is far more restrictive than almost any Christian sect). Likewise, “conservatism” here means only the emphasis on traditional norms. Non-Western traditions seem to be acceptable to the left.
**The change also removes a post-directorial saturation edit which Virgin had made against Radford’s wishes.
I grew up in a small town. Not a really, really small town, but a medium-sized suburb of a small, ruralish city. As such, I was fairly disconnected from the cutting-edge culture of my time. It might be a phenomenon that you need to live through to fully understand. In some critical ways, the culture of “the 80s” that I experienced when I was younger would have looked, to someone from New York or Los Angeles, about five years out of date.
Years later, shortly after watching Napoleon Dynamite, I was part of an argument about anachronisms within that film. In some ways, the film seems to be taking place in the early 80s. At the same time, there are clearly contemporary artifacts within the film that some have felt were included in error. It hit me that there is a gag there, one that highlights the phenomenon I had experienced myself. The jest of Napoleon Dynamite is that “I went back to my home town as an adult, and it was still stuck in 1981.”
As a kid from New York City, or L.A., or even Chicago or New Orleans for that matter, you’d be exposed to the (young) adult culture of your time. In 1981, at least, growing up in rural or even remote-suburban America meant restricting one’s cultural influences to the big-three TV networks and other mass media. Decades down the road, I’ve paused to think about the message contained in that media. How biased was it? How influential was it? How pervasive was it?
As a case in point, look at the film Footloose. No, not the new one, the original. The 1984 movie was inspired by real events from 1981, when a high school class petitioned the school board for permission to hold a prom, circumventing an ordinance that had been on the books since the foundation of the town (82 years earlier). The school board took jurisdiction, stating that a prom was not the still-forbidden “public dancing” but, instead, a private function not covered by law. The reality was not quite as dramatic as the film version, but the rough elements of the tale (including church-driven objection to public dancing) were transformed into a wildly successful and very popular film.
For the film, the location was moved from Elmore City, Oklahoma to the more ominously religious-sounding Bomont, Utah. Ren, Kevin Bacon’s character, is an amalgamation of class officers Rex Kennedy and Leonard Coffee (Rex and Len = Ren). And while Coffee had only come to Elmore City in the sixth grade, he was hardly the too-cool-for-Utah Ren from the big city of Chicago. The personal drama that loss-of-innocence lead female Ariel experienced, having recently lost a sibling in a drunk-driving-related crash, seems entirely unmatched by the class officer and daughter-of-the-school-board-chairman Mary Ann Temple, whose father actually cast the deciding vote in the prom’s favor.
Set aside for the moment that, John Hughes films notwithstanding, Chicago is barely better than flyover country to the denizens of New York and L.A. Is an intended message of this film to show how backwards America is outside of the major urban areas? Can the only salvation for the hinterlands come when someone like Ren arrives and brings with him modernity? This does seem to be a message that was reinforced by television, film, and music nearly everywhere when I was young.
By the time I was college-aged, I had heard that message loud and clear. I was eager to escape my mildly rural roots and only considered employment in top-10, coastal urban centers. Disdain for the vast rural center of the country was high despite a lifetime without having experienced any of the repressive culture that the media assured me was pervasive. I just knew that cultish, religious extremists awaited me if I ever lost sight of the ocean in my rambles.
Only after getting older did I begin to value affordable housing and quiet, open space. Living outside the dome, I can see the mistakes in the shared prejudice of the urban elites*. Those people from “nowhere” who are less wealthy, less educated, and less cosmopolitan than you aren’t less intelligent or less informed. In fact, my sense (and this is borne out by recent studies) is that it is the opposite. In the vast red fill of the United States, a conservative is still bombarded with the progressive viewpoints that pervade our culture. If he likes Trump, he does so hearing, on an almost daily basis, how awful Trump is. If you work in a Manhattan law firm, on the other hand, your protective bubble is nearly impervious.
If you accept that the portrayal of the bulk of America as being populated by closed-minded, bigoted, uneducated morons is more than a bit unfair, the question I have is, was the misdirection intentional? I’d be willing to state with confidence that there has been a concerted effort to push this country out of its “traditional” mindset. In some cases, that has been a good thing. I think we all can appreciate the reduction in racism and sexism that has taken place over the last century and acknowledge that some of that was accomplished as a top-down effort. In other cases, folks passionately take sides as to whether the “new” is really an improvement over the “old.”
It is certainly possible that it was unintentional. Writers and directors tend to live in New York or Los Angeles and their own worldview is bound to make its way into their creations. Similarly, many a generation of teenagers has longed to get away from the little place they grew up in to go see the world, and it was inevitable that this message would make its way into the songs and other entertainment of the young and for the young. Until very recently, my take would have simply been one of art imitating life.
I do wonder, however, if the push isn’t a little too one-sided, a little too profound to be accidental. Go back most of a century, for contrast. I was always struck by the sappiness of the film version of The Wizard of Oz. The film extols the virtues of country living, family and friends, and a little place to call home in a way that seems artificial. The creators of that piece were also from New York and California, but they seem to have felt compelled to speak to their potential audience, an audience who populated the vast middle of the country, in a sentimentality that goes far beyond their source material. The films of the 80s and 90s would rather ridicule their suburban audiences, and in a way that was no more natural than The Wizard‘s sound-stage countryside.
Ironically, the progressive left needs to rewrite the narrative yet again.
When I grew up, “the fifties” was an epithet to rival “small town” in terms of explaining what was wrong with America. Before the Summer of Love and the Sexual Revolution, we were told, America was stifled by rigid conformity imposed by corporate overlords. The hippie revolution preached the need to free oneself from “The Man” through a back-to-basics, do-it-yourself spirit that would live comfortably among today’s right-leaning preppers. Throw off that tie and starched, button-down shirt and be free.
Those fifties were, in some ways, the height of America’s cultural power. We had emerged victorious from the Second World War with nearly all our potential competition in international trade either defeated or, at the very least, devastated by that war. While America’s orientation toward free enterprise continued** to drive our successes, the unprecedented economic explosion and resulting world economic hegemony should not be discounted. For the generation that includes several of the upcoming crop of Presidential candidates, those post-war decades appear to be the baseline – that which existed before any intervening political, cultural, and economic changes took place. In that context, one can easily project one’s own values onto this success and the fall from grace. Just as I have given credit to economic liberty, someone else might cite the labor movement and the New Deal as the primary factors creating the 1950s successes and, conversely, the drift away from America’s socialist experimentation as causing the end of our economic domination of the world.
Bernie Sanders is on record making statements to this effect, although I think I’d read the argument before it came from his mouth. In his case, I wonder if it is his age showing. From the point of view of Bernie’s generation, he is turning the values of his opponents to his advantage. For a young Bernie majoring in political science in Brooklyn, his “moral majority” opponents would have been, figuratively speaking, “living in the 50s.” To him, perhaps, those to the right of him politically still are. However, does what looks to him like a brilliant strike in the culture war instead appear, to most of us alive today, like he’s fighting a battle that was over and done before we were even born? Nostalgia for a better time is an extremely powerful factor in politics. It’s bound to be a winning strategy, but the nostalgia must exist of its own accord. Bernie can’t manufacture something that isn’t there.
Or maybe I’m just taking a silly, pop-culture movie way too seriously. There’s nothing wrong, after all, with a little bit of dancing.
*Although, to this day, I find it impossible to reconcile how that Alabama man with the Gomer Pyle accent was one of the world’s top rocket designers.
**America’s role as the “arsenal of democracy,” itself, sprang from our traditions of economic freedom. I would argue that this, in turn, was a significant factor in allowing us to emerge triumphant from the war.
Just in time for another attempt at a Dune movie, the 1984 version has appeared on streaming.
Sometimes, those things that are remembered fondly from childhood don’t stand the test of time. When I was a wee tyke, I loved Speed Racer. My parents wouldn’t let me watch it (pinko-commie propaganda*, my Dad called it) so I had to sneak over to my friend’s house after school to view the episodes. (This is all very traumatic, so excuse me if I don’t go into all the details.) Sometime circa 1990, I found a load of VHS tapes on deep discount at the local Blockbuster** and one was a set of Speed Racer episodes. I bought it and watched it. It was awful. Horrifyingly awful. I’m sorry I watched it and wish I would have let my romantic memories of the excitement and beauty of Speed Racer live on. To this day, I have not watched a single episode on YouTube.
So what happens when something is remembered not-so-fondly? Does one dare to revisit the dark times?
If I had to guess, I’d say I read the original Dune novel in the late 1970s (the book was published in 1965, less than a year before the Speed Racer comic was first published and a little more than a year before the Speed Racer TV show). In retrospect, the film would have looked like a dream project. Frank Herbert’s book is one of the classics of the science fiction world. The film, written and directed by David Lynch, would have been a must-see, particularly after the appearance of Blue Velvet (1986). Likewise, the cast was a wild assembly of actors that should have been a real draw. Kyle MacLachlan was unknown at the time. Cast as MacLachlan’s nemesis, Sting was not. So how did it go so wrong?
David Lynch had been offered the opportunity to direct Return of the Jedi, which was released in 1983. Lynch wanted artistic control over his own project, not to be working under the shadow of George Lucas. Thus, Dune was to be Lynch’s own Star Wars. Recently, actress Virginia Madsen (who played Irulan, the daughter of the Emperor, and, in a post-filming edit, narrated much of the movie) said she was signed on for a trilogy. Oddly enough, at the time Lynch was brought on board, he hadn’t read Dune, nor was he a fan of science fiction.
Star Wars itself was, by Lucas’ own statements, influenced by Dune. The similarities may seem superficial and, as a matter of fact, I hadn’t noticed them before now. From the desert planet that becomes the focus of an empire to Princess Leia/Princess Alia, the comparisons jump out once you are looking for them. Apparently, they become more and more obvious if you read early versions of the Star Wars script, where the Star Wars universe was more filled out and bore a resemblance to Herbert’s future-feudalism. A serious take on Star Wars using the giant of science fiction literature seemed just what the doctor ordered.
When the film came out, it was savaged by critics. As bad as the film may have been, I always felt many critics went a bit overboard. I remember reading a criticism of the way half the dialog is made up of characters’ inner thoughts. At the time I thought this was merely the script-writer having been a true fan of the book***, as this dialog pattern (and many of the details) is moved directly from book to movie. Other criticisms were dead-on. Viewed in 2019, the visual effects are god-awful, but they were bad even for 1984. How this happened is a mystery. Dune had the budget of Return of the Jedi as well as the Star Wars trilogy as a bar for what space-epic special effects should be. The battle scenes are wretched, often showing groups of a few dozen extras rushing from one side of a soundstage to another, popping off pretend shots at an unseen enemy. The screen time blown on these pointless action scenes has to be made up by jarring cuts in the narrative part of the story, glued together by voice-over narration.
As the film was developed, the running time was one major bone of contention between Lynch and the studio. Lynch’s rough cut ran nearly four hours and his intended cut was projected to be more than three. This was in line with earlier attempts. Director Alejandro Jodorowsky’s work in the early 1970s was looking like it would come out in the 10-14 hour range. Herbert’s own script was estimated to be about a three-hour version. Studios, however, wanted a two-hour-or-so length that would fit neatly into their distribution models. The result was the two-hour-and-17-minute version that went to theaters. This is also the version I just watched. An additional 50 minutes were added when the film was brought to television, but this was a studio edit and not some kind of “director’s cut.” Lynch himself has said that the whole experience of this film is too painful to revisit and he has never had any urge to restore it to something closer to his vision. In any case, he says, he knew that he would not get the final crack at editing, and so his compromises are baked into even the raw footage.
There is a good chance that the first time I watched it, I watched it on TV and saw the longer version, though I don’t remember. This may even be one reason why rewatching has shown me a film that is worse than my memories of it; the cramming of the story into too short a running time may actually have been absent from my own original version. The special effects are also worse than I remember, but that’s to be expected.
Going back again to 1984, the studio was geared up for a merchandising bonanza to rival Star Wars. Based on Lynch’s talent (he had written and directed The Elephant Man, released in 1980) and the success of the novel, previews were positive and plans were big. Toy stores were populated with action figures, toys for boys, and even a new strategy game. All for nothing. Ironically, despite the film being such an obvious and well-known financial failure, of all David Lynch’s films this was the biggest initial-run earner, and it was the number two film (Beverly Hills Cop was #1) in its opening weekend.
But let’s go back to that strategy game. A new game was developed and released through Parker Brothers specifically to tie in with the movie. Surprisingly, given its origins, it doesn’t look all that terrible. The game pieces are the major characters from the movie, featuring the actors’ likenesses. You move those pieces around one of two tracks; one to build character strength and one to accumulate resources, both in service of the fights-to-the-finish that occur when two opposing pieces land on the same space. There is some strategy involved and the components appear to look pretty nice. On BoardGameGeek, it clearly outranks (for example) another 1984 game of similar look, The A-Team. Even with such praise, however, it can never hope to be more than “the other Dune game.”
This is because in 1979, Avalon Hill released a Dune game that, surprisingly enough, retains a very high player rating (again, per BoardGameGeek) to this day. In fact, it is essentially tied for the 5th-best Avalon Hill game of all time – with a nearly identical rank to Advanced Squad Leader and Civilization. It may even be possible that I bought the Dune game first and then bought the book as a follow-on to the game, as opposed to the other way around. That makes the very high score even more surprising personally – I’ve actually played the game. Recalling impressions from as much as 40 years gone, as difficult as that may be, I didn’t think of it as one of the best Avalon Hill games ever made. As I remember, I wasn’t entirely impressed with the game’s ability to immerse one in a feeling of reliving the novel. Compared to my other Avalon Hill games, it didn’t seem to have much going for it as a wargame. What I have never done, then or since, is play a large, multiplayer game. One would imagine that the best Dune session would have one player for every faction present in the game, six in total. Back in the day, I was lucky to get a second player who wasn’t a younger sibling.
Now that a new version of the movie is coming out next year, there is bound to be a resurgence in all things Dune. Like several tries before, the proposed cast for the movie looks excellent and director Denis Villeneuve certainly has some success under his belt (Sicario) as well as some warning signs (e.g. the fine-sounding but unfulfilling Blade Runner 2049). Naturally, some enterprising concern grabbed the board game license and a re-implemented version of the classic is due to come out before the movie hits or misses. Computer titles will almost certainly be in the offing, particularly considering the importance of Dune II in the development of the RTS genre.
There is a range of other Dune spinoff products, although I’d say considerably fewer than one would expect given the popularity of the source material. I watched the mini-series when it aired on TV. At the time, I thought it wasn’t bad for a Sci Fi channel production and perhaps a little better than the Lynch film. There were a few games besides the Avalon Hill and the Parker Brothers versions. There was an RPG that got crushed during licensing machinations. There was a collectible card game (1997) and a more recent print-and-play dice-based game (2015). For the computer, there was Dune 2000, a graphical remake of the classic Dune 2 – perhaps a little too much “re” and not enough “make.” There was a disaster of a computer adventure/action game based off of the mini-series. Last but not least, as I just found out, there was a total conversion mod for Civilization IV.
The Dune mod is built upon the Civilization IV: Beyond the Sword version, which itself seems to have gone mostly by the wayside, despite still having some of the most interesting mods and scenarios yet done. I downloaded the Dune Wars: Revival version of the mod, a 2015 rework of the original (called Dune Wars). The setting is that you find yourself on Arrakis post-apocalypse (of some sort). Each of the major entities (including a few beyond the standard houses) has a Settler unit, poised to found a new colony. In other words, it’s a Civilization game.
The standard features of a Civilization world are converted over to the Dune universe. So there is the desert, impassable to many of the ground forces, rather than ocean. Water replaces food as the critical resource to sustain life. In addition to barbarians (the black-flagged force in the above screenshot), the landscape is crisscrossed by sandstorms and marauding worms. The backstory is that, in addition to all Arrakis development being destroyed, connectivity to the greater universe has been cut off. Domination in this game means reestablishing control over Arrakis and reconnecting to intergalactic trade, or transforming the planet into the water-filled paradise of the prophecies.
Workers upgrade the land but, substituting for the usual developments, you must use technology to harvest water and (of course) Spice. Shown in the above screenshot, workers have ventured out into the impassable desert to construct spice harvesters. They look better on a live screen, as they scuttle around picking up spice and spewing clouds of sand. It is a cool upgrade visually and some of the specific substitutions are inspired. As a means of reliving the book, it is again not so powerful. Of course, is this really any worse than Dune II trying to portray the novel as an RTS?
I fished around a little bit in the mod’s menus. I guess I really didn’t expect to find it, but it seems like an interesting direction to go with this would have been to build a world set up for the book’s opening. The fact that it isn’t done suggests it probably isn’t doable. I can imagine quite a bit of work (both map design and scripting of events) going into such a project but producing something that isn’t any more engaging than the random-map/start-from-zero version that already exists in the mod.
We’ll get to see the newest try at the movies next year. It is possible that the book and the greatness within it simply cannot be translated to another medium, no matter what kind of resources and determination you have available to you. It seems like there is a movie, or a game, or a TV series waiting in here somewhere, but I sure can’t say what those magical missing features are that would make it all work. Or maybe…
Until I started writing this article, the connection between Star Wars and Dune was not at all obvious to me. Now, it is the most obvious thing in the world. Star Wars didn’t exactly bring Dune to the big screen but maybe it took what could be taken and came close. Had Star Wars been more serious and a little darker, it may well have been the version of this story we credit with getting it right.
*To be honest, when I watch some of the 70s cartoons today, I think he may have been on to something.
**How old do you have to be so that last sentence doesn’t sound like complete gibberish?
***Thus my surprise, again very recently, to learn that David Lynch was not a fan of the book. One must assume that this was simply his take, upon reading the book, of how to put it onto the screen. Herbert, himself, was pleased to hear much of his dialog survive the move to film intact. In fact, Herbert speculated that a major problem was the cutting of the film’s running time, leaving necessary scenes out of the final product.
Recently, I was trying to find some information about the various adaptations of The Shining, and I came across an offering of three Stephen King made-for-TV adaptations in one DVD package for an almost-reasonable price. In addition to The Shining, the set contains the Rob Lowe version of ‘Salem’s Lot, and It. I’ve watched all three of them before. The TV version of The Shining is a much more faithful adaptation of the book, which is why I want to watch it again. Salem’s Lot pales in comparison to the 70s version which, itself, didn’t really do its source material justice. I’ll probably want to watch them both fairly close together so as to compare on a fair basis. As to It, well, until recently it had the advantage of being the only adaptation of the book out there.
I read the book shortly after it came out in paperback. I can’t remember exactly when but, in any case, definitely before the original, TV version of It was aired. As a fan of the book, I made a point of catching the two-part movie (miniseries?!?) when it was on. I can’t say I was entirely thrilled.
But first, something not about the what and how, but about the when. The story ITself (ha ha, right?) is about an ancient evil that awakes every 27 years (give or take*) to feed on the fear of its victims, preferably children. The “present” of the book is the fall of 1984 into the summer of 1985.
Whenever a story is updated, the author/adapter often feels the need to update it vis-à-vis current events. A story about terrorism written before 2001 would feel like it was ignoring something important if 9/11 wasn’t subsequently included, wouldn’t it? Stephen King gives us fine examples of both what to do and what not to do in his rewrite of The Stand to take place after AIDS. This seemed to be necessitated by the fact that the book is set in the “near future” (the original hardcover was published in 1978 and set in the 1980s). Does it make sense to write about a “near future” that has already long passed and, obviously, didn’t actually happen? Or is it better to keep moving your near future forward, while you’re at it, so the reader (at least the ones who rushed out and bought your revised book as soon as it came out) still gets the sense that your possible future remains possible?
I guess it depends.
In any case, when the made-for-TV movie treatment came out four years after the book was published, the narrative was advanced five years, to still take place in the “present day.” Again, it probably felt more natural to engage the viewing audience with their own “near future.” It also avoids anachronisms. A story set four years in the past has the problem that the writers know what happened in the intervening four years but the characters don’t. Maybe not a problem, but putting the characters and the audience on equal footing feels natural.
Even at the time, the mini-series was somewhat disappointing. Part of it was the gap between “made for TV” and “movie” budgets, circa 1990. These days, we expect our TV “events” to look polished. Not so much in the late 80s. Even by those standards, however, watching a badly-done stop-motion monster fighting on screen felt a little off. With its TV origins, this production had a strike against it, although it wasn’t entirely its (or Its) fault.
The second strike is that it is a Stephen King novel. With one or two notable exceptions, converting Stephen King’s material to film has not worked out well. The Shining is often seen as an exception although, having just read Doctor Sleep, I now know that King didn’t much care for Kubrick’s interpretation. Its success has more to do with the Nicholson/Kubrick vibe and, in that regard, it is unlike any other King-derived film. Stand By Me is my personal pick for the exception that proves the rule. Based on a King short story, the movie is both faithful to the original and exceptional in its own right. I’ll state that, for what it’s worth, there are plenty of adaptations that I haven’t seen. Although, to a large extent, this is because so many of them are so bad. The King stamp on a movie frightens me off and not in a good, Stephen Kingly sort of way.
I’ve thought a lot about this phenomenon. For me, I think the answer is that Stephen King’s writing is extremely visual. I talked about this before when it came to his ability to describe the indescribable, and this is part of it. It also applies simply to his ability to create an image in the reader’s mind, whether that be the look of Randall Flagg, the creepiness of a House, or just the view down an empty highway. When his writing does so well in using the imagination to paint a picture, translating that to two-dimensional images on a screen will invariably fall short.
With regard to It, there is a similar failing in translation. For example, King uses the repetition of certain words and phrases to help establish the alternative world in which his stories take place. It works well within the books. It works considerably less well when written in as dialog in a screen adaptation. In the book, a phrase like “Beep-beep, Richie” implies a long history between old friends. They’ve known each other for so long that they’ve established their own language when they talk to each other. King lets us see this history by showing them using that language. It illustrates the depth while avoiding either creating that depth (extensive portrayals of their relationships before the narratives of the book) or outright describing it (“the children had been friends for so long that they…”). However, when phrases like “Beep-beep” or “We all float down here” are put on the screen, they fall flat. Why? Part of it is the shortcomings of the screen relative to the imagination. Part of it is a lesser ability of an on-screen portrayal to create that depth, that history, in the way that books can. Part of it may just be bad acting. Or maybe the key is that what makes a good combination of dialog and acting on screen is very different from what comes off well in a book, all of which is very different from what would seem natural if encountered on the street between real people.
Folks often refer to Tim Curry’s portrayal of Pennywise as a strength of the original It miniseries. I find it telling that his best lines (“Kiss me, Fat Boy!”) were written entirely for the screen whereas the signature lines from the book (“Beep-beep” and “float”) come off as awkward. Unfortunately, it’s the awkwardness that dominates. Again, the production isn’t that out of whack for 1980s made-for-TV, but it struggles more as it ages.
Producers and directors surely saw where the 80s It failed. These very reasons must have made remaking the movie as a feature film seem like a good idea. If nothing else, top-notch special effects using 2017 technology should be a huge improvement.
Once again, of course, we have to shift the timeline of the story. The “past” episodes of the narrative now take place, as a matter of fact, in between the two presents of the original versions, in the fall of 1988 through the summer of 1989. The new film makes less of an effort to track the book but, even setting that aside, there is little** that shifting the story forward some 30 years impacts. The story does seem to suffer a bit, even when compared to the miniseries. Perhaps the strength of the special effects detracts from the human angle of the story. Especially in the book, and carried over into the miniseries, the monster plays on the weakness of each individual child, requiring individual character development. A theme is that the group can achieve together the triumph that, as individuals, would be a failure. As the new film focuses on special effects, with longer and more visual horror scenes, it leaves less time for setting up the characters.
There is also a big shift in the presentation. The original miniseries, like the book itself, tells the story by alternating between the past and the present. In the miniseries, the first “episode” introduces the adult characters and has them remembering their encounters with the clown as children. So by the halfway point, we know both their young and grown selves. The new movie is also filmed as a two-parter, but the first film, and the only one out as I write this, focuses entirely on the children. The adult characters are not present in any way. In fact, the story stands alone. Even if the second film were never made, the audience (except for those of us who know the original story) wouldn’t feel they’d been left with half a tale.
Another difference that struck me in particular was the ages of the characters. I don’t recall if, in the miniseries, the ages of the kids are made explicit. I do know that the actors are around the 11-year mark, and that matches the explicit ages within the book. In the new film, there are several clues that the children are meant to be older. First, they look older, as the actors are in the 14-15 range. It is clear that the characters portrayed are also older, although again I don’t recall their ages being explicitly stated. Henry Bowers has a car, which makes him at least 16. His victims need to be at least close to his age, high school at a minimum, to make his bullying of them plausible.
They also don’t act quite like 11-year-olds or even 14-year-olds. The boys emit a constant stream of obscenities and sexual innuendo that seems more appropriate for freshmen in college. While I’m the wrong age to have “been there, done that,” my own memories would suggest that boys aged 11 (or even 14) would not have a ready sexual joke for every occasion, particularly not “the losers.” A year or two later, or maybe some locker-room mentality, might make a difference, but that language seems rather out of place for the characters that speak it. It also may be that, despite the film being set in the late 1980s, they act more like kids of the twenty-teens. Is it also possible that the kids of today are that much different from, and perhaps more vulgar than, kids of the 80s? Maybe. Look at Freaks and Geeks or even Stranger Things for a much more plausible portrayal of boys at this age and in this age.
The older actors and older characters also change the tone of the attraction/romance component of the story. The actress playing the new Beverly was 15 or 16 at the time of filming, an age that can look pretty adult in some circumstances. That impacts the way the various “crushes” play out, even without changing the actual situations or the dialog. Society has changed, but biology has not. Within the book’s timeline, Jerry Lee Lewis married his 13-year-old cousin in 1957. In 1959, Elvis met and fell for 14-year-old Priscilla. Granted, adults dating 13-year-olds was frowned upon even then and, by 1985, even more so. Nevertheless, the sexuality of teenagers was at least acknowledged, to such an extent that today we can swap an unpopular 11-year-old girl for an attractive 15-or-16-year-old while blithely pretending we don’t know what we’re all on about.
On this point, I’ll go back to the book. Around the time the book came out, I remember there was something of a kerfuffle over media and its treatment of the sexuality of teenagers. I wish I could recall the titles in question, but I cannot. I think it involved movies. The situation involved the transport of non-pornographic films across the Canadian-U.S. border; films that depicted minors in sexual situations. Doing so exposed the possessors of such media to severe penalties under child pornography laws. It also caused something of a moral crisis among polite society. I recall thinking at the time that the moral crusaders had some real blind spots in terms of what they found offensive and what wasn’t worth noticing. It, at the time, received no particular negative attention except that which Stephen King generally garners for trafficking in witchcraft and devilry. Put it on MTV, and it was an international crisis. Write something many times worse in a book (one that exceeded 1000 pages at that) and nobody notices.
You see, the book version of It contains explicit descriptions of sexual congress among the 11-year-olds. This never made it into either movie version, even the newer one, where it might be considered at least age-appropriate, if still forbidden. It’s [again, It!] a real mixed-up set of values we have, applying different rules to different circumstances. Maybe that scene could never be written today, whether for a book or a movie; that I don’t know. Maybe in a few years, It will be republished with the sexing stuff removed. Or maybe we’ll continue as we always have: feigning horror at one reference while accepting it as mundane in another context, or perhaps even edgy somewhere else. In this, I suppose, the new Victorians aren’t all that different from the original ones.
*My hyperlink takes you back to an earlier post where I discussed the synchronization of the timelines in Stranger Things, Dark, and Back to the Future. It’s present day, as well as its 30-year-or-so cycle, fits in well with this theme. The death of Georgie Denbrough likely comes very close to the November 12th, 1984 date that seems to repeat. In this case, however, I’ll not accuse King of paying homage to Back to the Future. I think the key here is that the Losers’ Club of the story, and in particular famous horror author Bill Denbrough, are exactly King’s own age; when the characters become adults, they return to take on the monster at the age King himself was when writing the book.
**Maybe there is one, if only because writing this made me think of it. A key factor in the original stories (and miniseries) is that Eddie’s mother forces him into hypochondria and insists that he treat himself with ineffectual asthma medicine as a means of controlling him. The “prescribing” of a placebo inhaler makes sense in a 1950s plot. Not so much in 1989.
Much as with The Accursed Kings, I feel cheapened by the way I found the TV series Dark. It was recommended by a Facebook friend as a new Stranger Things. Albeit, he said, a more “adult” version.
The similarities are there. Even if that worm hadn’t been put in your ear, if you were a fan of Stranger Things, you would probably immediately start to make a connection. Dark is a show set in the 1980s*, with a number of the main characters in their teens and younger. The show has them attempting to unravel the mysteries of, and then save themselves from, some kind of supernatural happening caused by, or perhaps controlled by, a nearby government facility. Finally, of course, they were both created as Netflix Original Series.
To view this new show that way is to cheapen it (and to cheapen myself, having only been drawn in by the gimmick). Dark is no knock-off of Stranger Things. Beyond the similarities stated above, it is a very different story and experience. Dark is a German-language series created entirely for Netflix distribution. Fortunately, when it came to the language, forewarned is forearmed. Netflix supports user-selected combinations of language and subtitles, so I was able to reconfigure my options right at the beginning to use the original dialog subtitled in English. I firmly believe that re-dubbing a film in a different language detracts from the original’s quality. I want to hear what the actors are actually saying, not substitute actors reading some translator’s lines, none of them the original artists.
Dark is a show with solid acting, well-done mystery elements, and plausible science fiction. I was also particularly pleased with the soundtrack. The way the show uses songs helps to orient the viewer relative to the time-travel aspects as well as occasionally hinting at deeper meanings in the plot. It is also a nice mix of classic American/British 80s music with German-language songs (perhaps equally well known, if you are German). This is a well-put-together series from end to end. I’d be hard-pressed to identify any weak links.
One last bit of similarity between the shows. November 12th, in each of the years of Dark‘s story, is the key and final date of the supernatural occurrences**. In Stranger Things, the climactic events of season 1 all take place on November 12th, albeit in 1983. In Back to the Future, November 12th (1955 this time) is the date of the dance and the night that the town clock is struck by lightning. The connection hadn’t occurred to me before, but it seems to me that both shows were deliberately referencing the Back to the Future timeline with the dates, if not (more roughly) the years. The “present” of Back to the Future (1985) sits between the two presents of those series.
I’m sure it’s obvious, but I’m glad I watched this one. As much as I complain about the Stranger Things-based marketing, I have to admit that without some kind of hook, I probably would not have chosen to take a chance on a German-language show, particularly when it is Netflix streaming-only. In the early days of Netflix original content, they had an extremely high batting average, but these days I’ve come to see the “Netflix Original” tag as a somewhat negative indicator. Without the ability to look up ratings and reviews on other sites (and, for me, the Netflix DVD rankings still seem to be the most accurate indicator), you’re stuck going out on a limb.
This time the climb was worth it.
*By even trying to discuss this, I risk ruining several of the “reveals” of the show. Don’t read on if you’re planning on a viewing experience that remains uncorrupted. The show starts out in 2019 and, for some time, while there is reference to happenings in 1986, remains set firmly in that year. There are children who have gone missing in the here and now of 2019, despite the foreshadowing of links to the past. Eventually the show makes the better case that 1986 is the “present” and 2019 is the “future” (of course it is, right? It’s only 2018!).
**Yes, I’m trying to be deliberately vague.
I looked at a number of things over the past weekend. They all seem to me to fit into one grand pattern.
The Wall Street Journal printed a review of Conservatism: An Invitation to the Great Tradition by Roger Scruton. The reviewer is Richard Aldous, a professor of British history at Bard College and an author of works of his own on conservative themes (Reagan and Thatcher: The Difficult Relationship and The Lion and the Unicorn: Gladstone vs. Disraeli are named in his bio). The review leads in with much exposition on the nature and history of conservatism.
I’ll likely not be reading the book any time soon. Although it is only 164 pages and, apparently, a good and quick overview of conservative philosophy, my list of “must reads” has grown rather lengthy.
According to the review, while the book itself is not “dour,” the message of Scruton is that the conservative tradition is dying. Aldous goes on to suggest that, if there is a hope of survival, conservatism must draw upon its best traditions. Scruton himself, much to the delight of Professor Aldous, suggests that it is the liberal-arts colleges where conservatism can remain alive, no matter what happens in greater society. For those following the news, this may seem particularly improbable.
Also over the past week, I have seen some defenses of conservatism as the election of 2018 gets up to full speed. William F. Buckley opined that “A Conservative is a fellow who is standing athwart history yelling ‘Stop!'” That surely resonates with conservatives, but doesn’t that explain to a progressive exactly why conservatives are wrong?
Aldous draws a quote from the book that makes, perhaps, a clearer argument.
The quote speaks to the progression of conservatism from the defense of monarchy, through its anti-materialism, and finally to the alignment of conservatives and “classical liberals” (libertarians) against socialism.
In all these transformations something has remained the same, namely the conviction that good things are more easily destroyed than created, and the determination to hold on to those good things in the face of politically engineered change.
This quote seems very much on point in today’s political environment.
Also this weekend, I started watching the BBC series from 1985, The Day the Universe Changed.
This 33-year-old show is about the discoveries that shape our view of the world and, in doing so, shape who we are as a society. It goes without saying that much of technology has seen tremendous change over the past three decades. In a particularly glaring example, narrator and creator James Burke makes a statement about how “the telephone” still looks the same as it always has (pointing to the standard-issue AT&T model of the early 1980s) but has far more capabilities. But does that even look like a “telephone” to the teenager of today? Or does it merely look like an antique that she knows to be an “olden times” telephone because she’s seen it identified as such in pictures?
But oddly enough, his commentary on technology (if not the examples) still seems relevant. The comment about the form and function of telephones is a lead-in to the potential of the “microchip” to enable telecommuting. And while, indeed, technology enables telecommuting today, the discussion of pros and cons in which he engages remains relevant.
Counter-intuitively, the ideas that conflict with modern sensibilities are the philosophical ones: the ideas that most of us, and certainly the Burke of 1985, would consider to be far more timeless.
The opening show is about the foundations of Western Civilization in Greek thought and, particularly, the pursuit of practical knowledge and understanding of the world over superstition and religion. This pursuit not only changes our understanding of the universe that we live in, but changes in a fundamental way who we are as a culture and even as individuals. The foundation is an argument for “Western Exceptionalism” that immediately hits the 2018 viewer as bordering on “crimethink.” Could someone get on TV today and say that Western Culture is superior to (as is his example) the Eastern traditions of Nepal? I don’t think so.
Towards the middle of the show, he talks about the rituals and institutions that we have. He specifically dwells on marriage, universities, and courts of law. He explains that we have made these institutions particularly conservative, both in traditions and in trappings. Each of these, we are shown on screen, has its participants dressing up in archaic costumes to take part in the proceedings. Burke explains that this reliance on extreme conservatism in particular corners of our lives is a critical part of what allows our society to progress and flourish. Our culture is built upon the disruptive change that comes from scientific inquiry. A large part of the way we manage that change, and the individualistic thought that drives it, is by having certain cornerstones of society upon which we can rely. Deeply conservative institutions – like marriage, universities, and the law – anchor today’s tumultuous world in the ancient traditions of Western Civilization. Our identity can persist in a way that keeps us all sane even as our surroundings change at an astounding rate.
James Burke was not trying to be politically provocative with these statements and these examples. He did not mean “conservative” in the political sense. I would say he meant to draw examples that were self-evident to all his viewers.
Yet, to the viewer in 2018, each of these examples is indeed controversial and very political. Marriage is being devalued across the board while its conservative traditions are being systematically dismantled by the law. In the law itself, we are moving away from the self-evident situation where law and order was a bastion of conservatism. Political control of the machinery of government remains heavily contested, particularly in America, but recent years have seen opinions abound that progressivism has reached (or, at least, is on the verge of) a “permanent majority.” Law and order is no longer a clear characteristic of conservatism.
We also see that Burke absolutely agrees with Scruton and Aldous that liberal-arts colleges are conservative foundations of Western Civilization, an idea that made far more sense in 1985 than in 2018. While universities were already rapidly changing in the 1980s, one could still identify their purpose as ensuring that the instruction of the new generation of minds – the minds that would go on to create the science, law, and culture of the future – had the same foundation in the Greek, Roman, and European traditions as generations of their predecessors. Yet today, it seems that the goal is to teach the new, progressive orthodoxy and stifle any opinions that might cause that orthodoxy offense. Certainly the “dead white males” from whom we learned in the 1980s must be offensive to the students and teachers of today.
If Burke is right and these conservative rituals are part of what keeps society sane, what are we doing to ourselves in 2018? Progressivism is replacing these historical and universal truths with the “new truths.” Will we have to sacrifice society’s advancement in science and knowledge? Will we go insane? Or are progressives the ones that are right? Is there no virtue in going through the old motions for no better reason than that is the way they’ve always been done?
The last article I read, yesterday morning, finally throws a glimmer of hope athwart the steady march toward dystopia. The Wall St. Journal, again, published an opinion piece (by Emily Esfahani Smith of the Hoover Institution) about the Heterodox Academy, a self-described “politically diverse group” of professors and graduate students that has identified and targeted the free-speech-stifling environment of 2018 universities. If we are truly seeing a broad-based recognition that our society’s understanding of freedom may be hurtling in the wrong direction, we may be able to correct our course.
Hope remains that the twenty-teens may be seen as a weird cultural outlier where, very briefly, political discourse in the West was seized by the politically correct and became a black comedy. As long as the comedy sputters out, allowing cooler heads to prevail, we may yet return to the path of progress that we all once enjoyed. But a few dozen professors at a conference is just one small step.
Finally, all this talk of revolution reminds me of a picture that popped up on my Facebook feed yesterday, courtesy of a political activist. The imagery here gives heart to conservatives who feel, one way or another, victory will be theirs. I have no illusion that the Second Civil War will be brief – it will be awful. However, if the recognition of the absurd imbalance between the warring philosophies becomes mainstream, we may yet walk away from this in one piece.
[picture removed out of concerns regarding local hosting relative to unknown copyrights]
I have long intended to watch Metropolis.
The desire came upon me in the mid-to-late eighties. In 1984, music producer Giorgio Moroder created a new version of the film featuring a music score by popular artists. The film was also re-edited in an attempt to create a “director’s cut.” In addition to reworking the film to better match the version as originally released, the black-and-white footage was toned sepia, and the title panels were replaced with subtitles.
Let’s back up a bit.
The original version of Metropolis, as premiered in 1927, had a running time of two hours and twenty-some minutes. It met with a mixed response. From a technical perspective, the film was widely praised. As a piece of entertainment, however, it was not held in such high regard. At least not at the time.
In response, the production company hired playwright Channing Pollock to re-edit the film for distribution in America. Pollock cut almost 50 minutes from the running time, bringing the film under two hours and simplifying and refocusing it. He completely eliminated the character of Hel, under the assumption that American audiences would read the name as “hell.” The story was also simplified through the removal of much of the symbolism from the original work. Director Fritz Lang was not pleased.
Later, a German version was created along the same lines as the Pollock edit, also removing the heavy religious overtones and perceived communist propaganda. As the Nazis came to power, a 1936 theatrical release was created. This entailed further edits, which reduced the film to about an hour and a half. Lang, in later life, would express dissatisfaction with the film as a whole. Some speculate that it was the Nazi influence, first altering and then trumpeting the work as supportive of National Socialism, which soured him.
This leads us back to 1984 and Moroder’s desire to back out the Pollock edits and return to the original story. While I do recall the release in 1984, it was actually a few years beyond that when my interest was piqued. Following the attention and success of the 1984 version, another attempt at restoring the original film was embarked upon, this time by Enno Patalas of the Munich Film Archive. Using records from the German censorship of the film, he was able to restore the content of missing inter-titles. It was the praise of this version that made me decide that I should watch it. And yet I never did.
In some ways, that may have been a fortuitous move on my part. On July 1st of 2008, a copy of the original version of the film was discovered in a museum in Argentina. This was essentially a “backup copy,” a negative print, created during the 1960s or 1970s from a “positive” which was used at that time. It was a lower-quality copy (a 16mm, reduced frame version) created, not from a master, but from a distribution copy of the film. The negative was kept by the distributor, in part as a hedge against the volatility of the chemicals used to make 1920s-era film. This reduced negative passed through the hands of private collectors and into the care of the Buenos Aires Museo del Cine.
While this version still wasn’t 100% complete, due to damage sustained during the intervening forty-some years, it did contain the bulk of the scenes that had been considered lost to the ages. Using the academic work done to date, the scenes missing from the original were filled in from the 16mm copy to reconstruct the original film. State-of-the-art digital technology allowed the repair of damage that the 16mm version had suffered. In the several cases where the original film was missing and the copy was damaged beyond reconstruction, additional inter-titles were inserted describing the missing scenes. The resulting creation runs close to the original length (two hours and twenty-some minutes), is believed to be very close to the original content, and is shown with Gottfried Huppertz’s original musical score.
It was this version that I finally watched.
Well, most of it at any rate. I was finally prodded to watch it, as is so often the case, by Netflix’s decision to remove it from streaming. The experience being nearly two-and-a-half hours, I split the viewing up over multiple nights. Fatigue and some miscalculation with how Netflix posts the dates of removal meant I watched all but the last 30 minutes or so. One of these days I’ll figure out a way to see how it ends.
The experience of watching a movie often depends on your expectations going in. Something you’ve read was awful, but watched anyway only to find it wasn’t so bad, will generally leave you with a pretty positive impression. A better impression, even, than when a top-billed movie fails to live up to the hype, even if objectively the latter is superior to the former. So what does one expect when watching a silent, black-and-white movie from 1926?
Obviously we’re not going to be expecting Michael Bay. The film has to be appreciated within the context of what it is – a piece of technology that’s pushing 100 years of age. On the other hand, I’ve read so much through the years about what a monumental achievement this film is that I couldn’t help going in with some pretty big expectations.
First of all, as the critics said at the time, the technical aspects of the film are outstanding. This was possibly the first full-length science fiction film and, as such, it sets a foundation for all sci-fi to come. Now, the “special effects,” the 1920s equivalent of CGI, are a little goofy – apparently hand-drawn flashes and stars. But the effect of creating a cityscape with models and paintings is genuinely impressive. Near the end (my end, not the real end) I was actually a little bit surprised by a scene where parts of an underground city are destroyed by flood waters. I was surprised because, in the flood scene, the models look like models. This got me because they had looked so much more realistic in earlier scenes, where the same models were used as background for long shots. There are also some very innovative techniques. For example, a chase scene through catacombs is filmed using flashlights to convey the impression of a wild, winding flight – when all that is actually on the screen is a single character in a circular spotlight.
So, technique aside, what about the film as story? I feel like I’m swimming against the modern tide, but I don’t think the original critiques were so far off. Present-day appreciation for the film as narrative often brings up the importance of the “message.” Roger Ebert wrote that Metropolis was “audacious in its vision and so angry in its message.” Others wrote of it as a call for “social change.” I wonder to what extent fans overlook the flaws because they agree with that “message.”
The world premiere, on January 10th, 1927, followed less than a year after the 1926 British General Strike, in which 1.7 million workers walked out in support of the roughly 1.2 million coal miners who were locked out over wage and hour disputes. Fallout from this strike was a significant factor in the Labour Party winning a plurality of seats in the 1929 elections. In 1927, union power was becoming government power.
In Germany itself, this was the height of the Weimar Republic, with its social and cultural upheaval as well as the financial pressures imposed by First World War reparations. The Communist Party of Germany was, at the time, the largest Communist party outside of the Soviet Union and could draw approximately 10% of the vote. Combine that with other “workers’ parties,” including the Nazis, and the strength of “labor” in politics is also evident within Germany. Earlier in the decade, Germany had also seen general strikes, some of which took the form of armed putsches: by militias and war veterans (the Freikorps and the Kapp Putsch), by communists (e.g., the Ruhr Red Army), and by Hitler’s National Socialist Workers’ Party (the Beer Hall Putsch). Hold this thought; I’ll come back to it later.
So the context of the time is one of a growing strength of unions and their influence on government. The “dystopia,” then, of the film is one where one hundred years on, workers have been unable to achieve any progress along the then-current trajectory. The film opens showing deplorable working conditions where the laborers are worked to exhaustion in a 10-hour shift, with that exhaustion leading to a deadly industrial accident. It is the witnessing of this accident that starts the protagonist on his journey through the film’s story.
The suffering of the laborers is presented in a bombastic fashion that, I take it from the reviews, seemed as such even at the time. Work is shown as a choreographed dance of tragedy. Even the exit of the nearly-dead workers at the conclusion of their shift (10 hours!) shows them trudging in step, an image the director requires us to dwell upon for many long minutes, to make sure we get the point. As the film progresses, heavy imagery is piled upon metaphor. The city boasts a modern Tower of Babel. As an animatronic flasher is created, Weimar Germany is equated to Babylon. Etc. The hero becomes, not just a “woke” son of privilege with the power to intervene on behalf of the workers, but an actual messianic figure, who will lead the people to the promised land.
This is the bold vision that, I suppose, if you agree with the “workers of the world” narrative, would strike you as a positive. If, from your own world view, it all seems too much, the heavy-handedness detracts (perhaps fatally) from the positive qualities the film does exhibit.
Just to pick on a detail. As the director imagines the future, the wealthy elite are supported by a massive, underground, industrial operation on a scale that dwarfs anything of the “present day.” It is an extension, one supposes, of the German conglomerates of the time and of the industrialization that was taking place in Germany and the world. Point being, the technology portrayed (excepting, of course, the maschinenmensch that graces nearly every movie still) is well within the understanding of those making the film. That is, this massive industrial operation is futuristic only in size and scale, not in technology.
In a scene midway through the film, the protagonist takes over the operation of some equipment from a lower-class worker, having taken pity on his exhausted state. Said operation consists of physically moving a massive dial, at tremendous effort, so that the hands of the dial match light-bulb patterns on the dial’s circumference.
Now think this through. The hard part of this process, whatever it is supposed to be, is the control system. The intelligence (and this would have been somewhat futuristic at the time) to determine which light bulb illuminates requires something beyond what 1920s technology was capable of. Moving geared levers, however, is something that had been done for hundreds of years. In the film’s earlier scenes, we see there is plenty of available power within the industrial operation. There is pressurized steam, giant pistons, and rotating machinery. Once the “system” knows what position the dials are supposed to be in, rotating them would be the easy part, the “low tech” part, if you will. Bleed off a little steam and add in a mechanism to know where the dial stops, and the whole thing could be automated. Yet in our dystopian future, it is for this brute-force, physical labor that the workers are employed. They must physically exhaust themselves, unthinkingly following instructions given to them by the machines – by the automated control system that does all their reasoning.
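To put that reasoning in modern terms: the dial-matching job is just a feedback loop, and the half the workers perform is the trivial half. Here is a minimal sketch in Python – entirely hypothetical, since the film specifies no mechanism, and every name and number below is invented for illustration – of how little “intelligence” the physical side of the task requires once the target position (the lit lamp) is already known.

```python
# A toy simulation of the dial-matching task as a simple proportional
# control loop. All names and numbers are hypothetical illustrations,
# not anything specified by the film.

dial_position = 0.0  # degrees; simulated state of the massive dial


def steam_actuator(torque: float) -> None:
    """Stand-in for bleeding off a little steam to rotate the dial."""
    global dial_position
    dial_position += torque  # crude model: applied torque moves the dial


def control_step(target: float, gain: float = 0.5, tolerance: float = 0.5) -> bool:
    """One pass of a proportional controller; returns True once on target."""
    error = target - dial_position
    if abs(error) <= tolerance:
        return True  # close enough: hold position
    steam_actuator(gain * error)  # push proportionally toward the target
    return False


# The "hard part" -- computing the target -- the machine already does;
# here we simply pick the position where the lamp pattern says the hands belong.
target = 270.0  # degrees

while not control_step(target):
    pass

print(f"dial settled at {dial_position:.1f} degrees")
```

Steam for muscle, a stop to sense position, and a loop like this for “judgment”: the worker’s entire function disappears.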
It makes no sense. It makes no sense today, when control systems are far more advanced than what was portrayed in the movie and human-mimicking robots are becoming a reality. But it also couldn’t have made sense at the time. It’s “artistic license”: using an absurd situation to create a multitude of suffering workers that simply doesn’t add up to a coherent portrayal of reality.
Having now missed the end of the movie, I have to not only guess at what it all means – the moral of the story – but get the “whole story” from reading about the ending. I haven’t actually seen it. That said, the key theme that the ending expresses was already being used to smack me over the head midway through the movie: “The Mediator Between the Head and the Hands Must Be the Heart.” When focusing on this message, particularly when considering it in the context of the time, a slightly different take emerges than the labor-empowerment message that dominates most of the film.
Make no mistake – the film is pro-worker-revolution without a doubt. Of all the characters in the movie only the workers (and charity worker, Maria) are portrayed with any sympathy. The “industrialists” are, notwithstanding any life lessons they may be learning, cold and a bit greedy.
But perhaps the message is not meant for society, conveying to them the necessity of being pro-worker. Perhaps the message is to the workers (and those who would support them politically) about the dangers of following the wrong prophet.
Is the movie saying that the “Head,” the captains of industry, as difficult as they may be to love, are still an integral part of it all? Is the message that violence and destruction benefit nobody, and that it is only through working together that “labor” and “the owners” can move society forward for the betterment of all? Seen in this way, the leaden portrayals of the suffering workers may serve less to illustrate their supposed plight than to connect with the segment of the audience at which the message is ultimately aimed. While one can argue that we need the owners themselves to “have a heart,” that particular message finds no shortage of expression either in 1926 or in 2018.
Another little piece I’ll mention, though I’m not sure of it. Several of the more modern interpretations of the film talk about the dehumanizing effects of the “machine.” We see that, particularly in the beginning, as the workers are pretty much enslaved to the giant equipment. However, when the workers rise up and destroy the “heart machine,” the result is not their liberation but rather their destruction. Is this again an example of creating a connection with the audience (those who see industrialization and automation as evil) only to then deliver them the message (destroying the productivity-enhancing machinery will ultimately harm the workers themselves)?
Again, subtleties were lost both in the original presentation and in the attempts, first by the studios and then by the Nazis, to refocus that presentation over the years. I also have to wonder about an interpretation that is the polar opposite of what nearly every modern critic has to say about the film.
Whatever the case, I just don’t see this one as living up to the hype, although the blame for that should probably fall more on the hype than on the production. The industry may owe a debt to this pioneering film, but that doesn’t make it a great 80s film or even, necessarily, bubble it to the top of the movie buff’s “must see” list. When no one can experience all of history’s important film works, two-and-a-half hours may be a bit of an investment just to be able to say, “been there, done that.”