Are you a sentient being that has internal experiences and conscious awareness? If so, you may enjoy this post!
(If, instead, you are a philosophical zombie, devoid of any internal experience, you're still welcome to read it anyway... Just be sure you properly pretend you enjoyed reading it).
In this post, I'm going to use cartoons to explain what that feeling of "conscious awareness" is that we have in our heads. This sensation of internal awareness is an incredibly weird thing, and appears to be completely distinct from every other artifact of our world. How can it arise from physical matter? Why is it so hard for us to talk about it with one another, using language? Does it have an evolutionary purpose? In this post, I am going to try to get to the bottom of these questions.
Our exploration of consciousness is going to proceed in two stages: First, we need to think about what it means for something to be "real". Second, we're going to look at the key steps in the evolution of the brain, and why these steps make it difficult for us to appreciate the true nature of our internal subjective experience.
"We don't exist inside the physical world. We exist inside a story that the brain tells itself."
All of us are familiar with the concept that we don't experience the external world directly, but are instead insulated from that world by the processing of information that takes place in our sense organs. However, if we want to really understand just how distant the operations of our minds are from the external world, we need to go much further than thinking merely about the obvious perceptual separation: there are far more profound barriers between our minds and the physical world than the separation introduced by our sensory system. In fact, our minds are so far removed from the real world that it is best to think of them as residing in an entirely different type of reality.
Consider, for instance, this person visiting a grocery store on a typical day:
On the left, we see a picture of some random person's everyday reality: a human being is making decisions as they are walking through a grocery store. Inside that human's brain, different information processing modules are performing the many tasks that are needed for the activity of purchasing groceries. This process involves analyzing visual information, making purchasing decisions, and activating modules that weigh the importance of the current activity against other potential activities the human could be doing instead. (I crudely indicated the processing modules that perform such sub-tasks by drawing little colored dots on the brain in the left panel).
This left panel shows us the concrete physicality of the grocery store, and shows us that the human brain delegates tasks to different modules of the brain. However, these things are only distantly related to a person's internal experience. If we want to know what it truly feels like to shop for groceries, we need to look at something more akin to the panel on the right, where I have reinterpreted the same scene in terms of how objects in the grocery store, and the goals that the shopper is setting for themselves, exist within that person's mind.
A person's internal experience differs greatly from any concrete description of physical reality: inside a person's brain, information generated by disparate brain modules is "stitched together" into a very coherent and integrated story, designed to allow that person to make decisions as effectively as possible. This is what I've attempted to draw in the right panel. A person's mind, as they are shopping for groceries, is filled with distinct and well-delineated impressions of objects in the world (in the picture the person is considering the box of Chex and the shopping cart), which are seamlessly combined with internal mental artifacts (such as the person's annoyance at not finding a large box of Chex, or their anxiety about being late for work), in a way that allows the person to effectively perform many tasks, including shopping for groceries.
The critical goal of these panels is to illustrate the way in which our minds do not strictly exist in base reality. Instead, our minds exist primarily inside a story that the brain creates for itself. As a computer programmer might say, our minds are a reality "emulated" by the brain, which makes it fundamentally distinct from ordinary reality: the way time, space and external information are experienced by the mind is greatly warped from how those things exist in the base reality. Also, just as one would find in an emulated computer program, the way our thoughts are assembled and organized in our minds probably bears little resemblance to the way those thoughts were generated by the modules of our brain.
At a fundamental level, the way information is organized in base reality vs. story reality differs significantly. The biggest difference is that these differing realities have greatly divergent priorities: in the "story reality" the most important thing in the universe is YOU. The story reality exists exclusively in order to let YOU gather information and to let YOU perform actions. Of course, your brain tries to hide this fact from your mind, because a mind that is too self-absorbed would negate the evolutionary advantages that caused the brain to create the mind for itself in the first place.
You may have objections to the notion that there are two such distinct realities at play. For instance, you might argue that "it is incorrect that my mind experiences time and distance and objects in a completely different way from how those concepts exist in the real world. For instance, if Event A precedes Event B in the real world, my mind will also unfailingly experience those events in the same order."
First of all, some scientific experiments suggest that the order in which our minds experience sequential events can be highly inconsistent (for instance, consider the classical experiments with brain stimulation done by Benjamin Libet). But more importantly, if you believe this you are in fact overlooking an important truth that your brain is concealing from you: our minds were created through adaptive evolution, and in that context the mind's purpose is to act as an efficient decision-making mechanism for tasks within the "real world". To this end, you are right that our minds duplicate some aspects of base reality in an extremely precise way. However, duplicating all the details would be very inefficient and counterproductive. In fact, the facsimile of the world that we see in our minds is rather crude: the truth is that the physical world, as it truly exists, is extremely different from what our minds think it is.
This is what the real truth of our world looks like:
In the left panel, I made an attempt to illustrate the physical world as it really is: It is, in many ways, completely different from the way it appears to our minds. For one, physicists tell us that base reality differs greatly from our folk conceptions of reality: i.e., it is composed of quantum fields represented by wave functions, instead of the simpler, 3-dimensional Newtonian world that appears obvious to us. Secondly, base reality does not organize atoms into distinct objects such as "table" or "floor", but treats each atom as a distinct entity. The idea that the floor or a table is a distinct object is merely a notion existing in our minds (though certainly a very convenient notion, from an evolutionary survival perspective). Thirdly, the information we receive from the outside world, for instance in the form of photons hitting the retina of our eyes, is extremely chaotic and noisy, but we are shielded from this fact.
You'll have to excuse my drawing abilities: these are all things that are impossible to draw, so my cartoon above makes a rather feeble attempt to illustrate them. The point is, if we could experience base reality as it really is, we would find it to be very alien, harsh, and mostly incomprehensible.
In the right panel, we can see how this reality is instead perceived by our mind: in essence, the mind is given a theater performance within the "story reality", where objects in base reality are replaced by crude facsimiles that our mind lazily accepts as real. (Have you ever wondered why dreams are so easily accepted by our minds as real? As Joscha Bach has observed, this is because the same mechanism is at play during our dreams). Our brain orchestrates a troupe of actors, somehow implemented through special modules in our brain (illustrated as purple wizards in the picture above), that act out the real world in front of our mind; in exchange, the mind makes behavioral decisions that confer survival advantages to the organism within base reality.
In short, our mind exists in a sort of "safe space" in which it receives highly-edited information about base reality. Specifically, this information has been edited in a way that strips out as much information as possible, while still allowing the mind to perform its valuable decision-making activities. Through millions of years of evolution, our brains have evolved to heavily edit this information to maximize efficiency, but brains have to be careful not to edit it too much, or the decisions made by the mind based on that information would no longer provide a survival advantage.
Note that this concept of the mind I'm presenting differs substantially from that proposed by popular philosophers such as Daniel Dennett, who explicitly deny the existence of a "mind's eye" or internal homunculus. In their view, it is nonsense to think that there's an internal viewer that's experiencing reality as a sort of "theater" that's being performed on its behalf. Here, I claim the opposite: if we had such a theater it would offer an immense survival advantage, and precisely such a theater is taking place every waking hour (and dreaming hour) of our lives.
There was a recent movie which did a very good job of encapsulating the difference between "base reality" and "story reality". Can you guess which movie I'm referring to? Anybody?
... It was the 2019 movie "Us" by Jordan Peele. (Skip the next paragraph if you want to avoid some mild spoilers for this excellent movie).
The horror at the heart of "Us" is of course that we are actually the people in the tunnels, not the people on the surface. The people in the tunnels represent our minds, whereas the people on the surface represent our physical bodies. Do you remember towards the end of the movie, how the underground people were acting out scenes on the surface as if in a stupor? How they were miming out riding on a roller coaster or eating a meal? This is Jordan Peele's vision of what our "story reality" looks like, basically the exact same concept as the right panel in the picture above. The only part the movie got wrong is that the surface is actually the more horrifying place – If the movie had been more realistic, it would have shown the people on the surface fighting to enter the tunnels and enjoy their psychological comfort... not the other way around!
Alright, let's suppose my claim is true, that our minds do not reside in base reality, but instead reside in a different "story reality" that is only distantly connected to our notion of "base reality"... does this get us closer to understanding the notion of consciousness? I would argue that it does, because it means we no longer need to consider consciousness as a sort of distinct entity that exists in base reality. In short, we naively tend to view consciousness as a sort of "magical experience" that transcends normal physical reality, an experience for which normal laws of physics lack any explanatory power. But if, instead, our consciousness is a component of "story reality", then this apparent magic is no longer an explanatory barrier: in the emulated story in which our mind resides, there is no inherent limitation that prevents the existence of "hyperphysical" artifacts. (To phrase it in "hacker speak": the x86 instruction set includes MMX vector instructions that do not exist on an ARM processor, but if an ARM processor is emulating an x86 program, it has no difficulty at all in emulating MMX instructions as well, even though MMX instructions do not exist in the "base reality" of an ARM CPU).
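To stretch the hacker analogy a bit further, here is a toy sketch in Clojure (the opcodes and register names are invented for illustration): an emulated machine can happily support "instructions" that simply don't exist in the host's own instruction set.

```clojure
;; A toy CPU emulator: registers are a map, instructions are vectors.
(defn step [regs [op a b dst]]
  (case op
    :add     (assoc regs dst (+ (regs a) (regs b)))
    ;; :vec-add has no counterpart in the host's "base reality",
    ;; but inside the emulated story it is as real as anything else:
    :vec-add (assoc regs dst (mapv + (regs a) (regs b)))))
```

Calling `(step {:r1 [1 2] :r2 [3 4]} [:vec-add :r1 :r2 :r3])` yields a register map where `:r3` holds `[4 6]`, even though the host language has no native vector-add instruction of its own.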
I will concede that this argument feels a bit like a cop-out: the explanation for consciousness we've developed so far fails to address core questions you likely have, such as, "Why does the color red feel different to me from the color blue? Why would evolution require me to experience feelings, instead of simply existing as a mechanical zombie that is devoid of emotional states?"
To address this next level of questions, we need to explore the evolution of our brains in more detail.
Let us divide the evolution of the human brain into three phases. It's possible that the three phases of brain evolution I present in this post deviate from the way our brains actually came about, but that's okay: here, we are merely going to discuss what biologists refer to as a "just-so story". A "just-so story" means we're going to make logical guesses about what happened in prehistory, without meaningful empirical data to support those guesses. For our purposes, such historical speculation will be fine. We aren't concerned about accurately characterizing the brain's history, but are merely building a narrative to characterize the design that the brain manifests in the present.
The three phases of brain evolution I'm going to be discussing are the reflex brain, the stateful brain and finally the modeling brain. Each one of these represented a major innovation in the design of animal brains.
The simplest type of animal brain is the version of the brain that you will read about in any biology textbook. Roughly, this is the version of a brain you might imagine a fish would have. This type of brain operates at the level of reflexes, and uses many hard-wired connections.
For example, let's think about how a fish brain might decide if the fish should eat a donut, such as a raspberry jelly donut:
First, its brain needs to decide if the fish is physically capable of eating the donut: it accomplishes this through a circuit that combines sensory information arriving from the eyes (shown in green in the upper panel) with basic decision-making circuits in the brain itself that determine if the item to be consumed is not too big or too small. In order for the fish to go through the motions of eating the donut, this circuit has to say, "Eat it!"
Anyone familiar with evolutionary theory can easily imagine how such a circuit would evolve and how the circuit would "know" what the right food size is for the fish: in the past, many fish likely ate potential food items that were too large and choked to death, leading to evolutionary pressure to avoid such objects.
Another neural circuit a fish brain is likely to have is a circuit attached to the fish's gut which determines if the fish is still digesting previously-consumed food, essentially a circuit that tells the brain if the fish is hungry. Only if the gut is empty and this circuit yells, "I am hungry!" should the fish consume the donut.
Based on this concept of a simple fish brain relying solely on these types of concrete reflexes, we can already think of the fish as having a mind with a "story reality", but for a simple fish it would be an extremely boring reality:
In this "warped version" of reality, a version that looks at the world from the perspective of the fish's mind, we can see the simple artifacts of "hunger" and "jelly donut", but they are wrapped in a layer of green, which is my attempt to represent the sensory and motor neuron layers that lie between the fish's mind and the external reality.
What this green barrier is illustrating is that in order for the fish's mind to interact with the food item, the fish needs to use its neurons connected to its eyes to inspect the donut, or use neurons to activate its muscles to eat the donut. The same is true of its perception of hunger, which requires the fish mind to make use of neurons connected to the gut.
The "reflex brain" of a fish has a simple design, but also a rather inflexible one: this is because it relies on concrete connections between inputs (from eyes, neurons monitoring the gut, etc.) and outputs (animal behavior). The next stage in brain sophistication is to add a layer of indirection between inputs and outputs to create a more general, multipurpose neuronal architecture. With this type of design, an animal's eating behavior isn't just directly triggered by neurons monitoring the gut. Instead, the brain can now think "Animal is hungry!" and maintain that as an ongoing fact to guide behavior. As a programmer would say, the brain can process information more efficiently by maintaining "state" to represent properties of itself and of pertinent objects in the outside world.
This type of approach is more flexible and efficient for multiple reasons. First of all, it allows the animal to take the noisy signals it receives from its internal and external sensors and "smooth them out", instead of relying on continuous sampling of its sensors. In this way, an animal can maintain "object permanence": it won't forget that it's hungry as its hunger pangs come and go, or forget that it saw a food source, even if that source is temporarily obstructed from view. Secondly, having an internal representation of itself and its environment allows an animal to reuse brain circuits for multiple purposes, without needing to evolve each of them independently. This means the brain can think "Animal is scared!" and flee a predator, no matter how this fear arose: regardless of whether the animal was bitten or saw a predator with its eyes, the same evasive actions can be taken. In short, if the brain has an internal representation of the identity of the animal and of important objects in the environment, it can evolve higher-level circuitry that benefits from the simplification provided by this representation.
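To make the contrast concrete, here is a minimal sketch in Clojure of a reflex design versus a stateful design (the sensor values and function names are invented for illustration):

```clojure
;; Reflex design: behavior is a direct function of the current sensor readings.
;; If a hunger pang pauses, or the food drifts out of view, the decision flips.
(defn reflex-eat? [gut eye]
  (and (= gut :empty) (= eye :food-in-view)))

;; Stateful design: sensors update a persistent internal representation,
;; and behavior is driven by that state rather than by the raw inputs.
(def state (atom {:hungry? false :food-seen? false}))

(defn sense! [gut eye]
  (when (= gut :empty) (swap! state assoc :hungry? true))
  (when (= eye :food-in-view) (swap! state assoc :food-seen? true)))

(defn stateful-eat? []
  (let [{:keys [hungry? food-seen?]} @state]
    (and hungry? food-seen?)))
```

Even if the food is briefly obstructed from view after a call to `sense!`, the `:food-seen?` flag persists, which is exactly the "object permanence" described above.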
This stage of brain development, which I call the stateful brain, is where a "story reality" truly starts to emerge: once the brain has state management to represent the animal's identity and the identity of significant objects in the external world, the "theater of the mind" can really begin to take shape.
It is hard to say when animal brains first developed notions of object permanence with internal state management, but I imagine that the stateful brain is what a mouse might experience. Here is how we might illustrate the "story reality" of such a mouse:
Notice that artifacts in this reality are now wrapped by layers of two different colors. As before, we have a green layer representing the barrier of sensory and motor neurons separating the mouse's mind from the donut and perception of hunger. But also, we now have an additional layer of indirection introduced by the brain's state machinery, which maintains its own facsimile of the donut/hunger to improve object permanence. This is the layer that is performing the theater on behalf of the mouse's mind.
The state machine of the mouse now also makes it possible for the mouse's brain to maintain more general abstract notions that are NOT directly tied to its senses, i.e., the mouse's story reality can now contain artifacts without a "green layer". This includes notions such as "fear" and "anger", which can persist over time without requiring continuous sensory input.
The final major leap in brain development is when animals developed the ability to model the world across time. Again, it is hard to say exactly when this ability developed, but certainly monkeys and apes seem to possess an ability to model the outside world. When an animal with a modeling brain sees a donut, it not only thinks of it as an object in the outside world with properties... it also can make inferences about the past and the future of the donut. For instance, it can speculate that a human likely placed the donut there, and can predict what it would feel like to eat the donut, or what the donut might look like if it was left, untouched, for [what humans call] a month.
Humans have mastered the art of looking at the world from a modeled perspective, and the ability to model has become integral to how we think. Additionally, our social nature has reinforced this ability to model, since we use language to encode the models of our external world and can discuss them with other humans.
The ability to model the world is something that is deeply integral to how our brain works, something that was introduced into the brain (once again) through millions of years of evolutionary pressure. The purpose of modeling the world is to enhance our ability to make decisions by predicting likely future events in the external world.
This is what the "story reality" of an ape might look like:
In this picture, we now have yet another layer of indirection between artifacts represented in this reality and the ape's mind: when an ape sees a donut, it doesn't just perceive it as an object with various properties, but also as an object imbued with many expectations about the future and past of that object, based on internal models. This means that when an ape sees some unripe bananas, it's likely that it will say to itself, "I should come back in a few days; the bananas will look tastier then." A mouse, on the contrary, would most likely be unable to do so.
However, notice that the ape's internal instincts, such as hunger or anger, are not marked as being affected by this new "modeling" layer. I would argue it is a general rule that our brains did not evolve to model internal instincts like hunger or anger. This is because, at the timescale in which evolution operates, it would always be more efficient to simply modify these internal instincts directly, instead of building modeling infrastructure around them.
To understand why our brain would not be intrinsically designed to model internal states, let's look at anger as an example. We, as humans, of course have a model for how anger works: an adult human understands that sometimes it's better to "count to ten" when we feel angry, because we know that with some time our anger in a situation will dissipate, and that in a modern society intense outbursts of anger are best avoided. However, this is a skill we learned through culture; it is not intrinsically "built into" our brains. We know this is true precisely because of the great lengths we must go to in order to override the "raw hardware" of our brain in these instances. This is why, even as adults, we might resort to "counting to ten" to perform the unnatural act of quashing our natural tendency towards anger.
However, our pre-cultural primate ancestors would not have had any reason to treat anger in this way: if high anger is maladaptive in a pre-cultural primate's environment, evolutionary pressures can simply "dial down" anger directly. Over evolutionary timescales, it is far more efficient to modify internal states directly, as opposed to making the organism expend energy creating complex, generalized models of internal states.
So my thesis is that our brains are not evolutionarily designed to model our own internal states. Instead, the brain's modeling ability is focused primarily at making predictions about events in the external environment. Our brains can effortlessly model how a rock might roll down a hill, or how a predator might stalk us, or model how a banana ripens over several days. However, our brains are not intrinsically designed to model concepts such as hunger or anger in an abstract way.
While it is true that humans do actually possess the unusual ability to model and make predictions about internal states, this is something that arose only in the last few thousand years. It is a "hack" that we learned and spread to our peers through cultural means. It is not something that the raw hardware of our brains is equipped with.
The ability to model the world and make predictions truly sets higher animals like primates apart from other animals. Humans, in particular, have taken this modeling ability to a whole new level, by being able to communicate these models to other humans through language and culture. However, the internal artifacts in our mind's "story reality" are not part of these models that our brains create on our behalf. Instead, we experience sensations such as "hunger", "fear", or "the color red" in a more raw form, without a layer of indirection introduced by our brain's modeling hardware.
I posit that this is the reason why we have such a hard time talking about the internal sensations of our minds, or even reasoning about them in an effective way. When we talk about how we experience feelings and qualia (the word philosophers use for experiencing sensations such as "the color red") and how hard it is to understand what these sensations are or why they exist, the reason is that feelings and emotions, in our reality, come about due to an absence of something: they represent those entities in our mind that are not managed by our brain's modeling system. In the diagram of the ape's mind above, they represent the artifacts that do not have a blue wrapper around them. These artifacts are the ones that we view as being part of our self, and at the same time this is the reason why it is hard to describe these artifacts effectively with language.
All of us have a subjective internal experience that seems to transcend the physical world, and seems to lie beyond our own explanatory power. This experience is what we call "consciousness". In this blog post, I suggest the experience of consciousness can be explained as follows:
Consciousness is a very mysterious concept, and it is very difficult to imagine how such a thing can exist in the rather mundane base reality that we are a part of. To me, the ideas that I have sketched out in this blog post go some distance toward explaining how something such as consciousness can exist within the physical world.
I'm happy to hear thoughts that other folks, such as yourself, might have that could further clarify these perplexing questions, perhaps the most perplexing of all questions! Feel free to send comments my way on Twitter.
Are you looking for a VR RPG game with lots of strategic depth?
You will enjoy Walking Dream, a single-player adventure game targeted for the Oculus Quest in early 2021. This game project, led by classic game developer and illustrator Conrad Barski, combines elements from both old and new RPG games in a completely new way!
(Also, the project is getting tons of game design help from Dave Pickett of BRICK 101)
After falling into a vat of chemicals and being in stasis for 500 years, Derek X wakes up in a post-apocalyptic America, strangely mixed up with fantasy creatures. To get back to his own time, he will need to use a futuristic toaster to fight many brave battles. This will require him to go on the hunt for ever-more-powerful slices of cybernetic toast to vanquish the many powerful enemies in this new world.
If you own an Oculus Quest helmet, you'll love Walking Dream: It's been designed 100% for the Quest, but is completely different from other Quest games. No other game offers a similar turn-based mechanic, unique weapons crafting system, custom cartoon-style 3d artwork, or a similar redirected walking engine for movement. Expect a completely new type of game for your Quest!
If you've played old-school RPG games (such as the early Ultima games or Heroes of Might and Magic) you know how different the RPG gameplay was in these games, compared to modern RPG games: This is mainly because the turn-based design of movement and actions in these games allowed the play to be more thoughtful and strategic. Walking Dream brings back this turn-based experience!
Lots of RPG games feature combat that is almost identical from foe to foe, forcing you to grind through repetitive enemies to beat the game. Walking Dream, on the other hand, is built on a brand new combat engine with a rich weapons crafting system. This system is shared by the player as well as the many enemies, and turns each battle into a completely novel experience, requiring fresh strategies at every encounter!
The expansive world of Walking Dream is built using a custom, low poly cartoon modeling system. This allows us to give our game an expansive world, filled with tons of custom artwork with a unique style. Walking Dream is not just another indie game built from prebuilt, cookie-cutter 3d models!
The ultimate dream every player has when using their VR helmet is to simply be able to walk around the world, carefree, but without bumping into the couch in their living room! As its name implies, Walking Dream makes this possible, using a new redirected walking engine that uses several new programming tricks to make this type of locomotion more practical than ever before!
Release of Walking Dream is planned for early 2021. Check back here on walkingdre.am for the latest info!
In this post, I will be sharing a horrible, no good hack for the Emacs Cider Clojure REPL that you should definitely never use yourself.
When debugging or executing a Clojure/Clojurescript program, I often want to execute functions in my program from the Clojure REPL. However, as much as I love the Clojure syntax for programming, it's pretty inconvenient for regularly executing commands in a program you've written.
For instance, I have a task tracking app written in Clojure. Every time I wanted to add a new todo item through the REPL, I used to have to type something like this:
> (a "iron curtains")
Though I was able to streamline the command by having the single-letter function name, this command remains incredibly awkward! What I really would prefer to type is this:
> a iron curtains
Syntax error compiling at (*cider-repl*:0:0).
Unable to resolve symbol: iron in this context
Syntax error compiling at (*cider-repl*:0:0).
Unable to resolve symbol: curtains in this context
Of course, since this shorter command does not obey the parsing rules of the Clojure reader, it just leads to syntax error in the REPL. This raises the question: Is there any way we can work around this limitation and enter custom commands into the Clojure REPL, using our own rules for parsing the command? The answer is... yes, sort of.
At this point, some of you may remember my classic Lisp tutorial, named Casting SPELs. In this tutorial, I implemented a text adventure game in a Common Lisp REPL, letting you enter commands such as "GET BOTTLE" and "WALK EAST". Naturally, the tutorial had to make compromises to work around REPL limitations, for the exact same reasons I'm outlining in this post... so as you can see, I've already had a decade-long love-hate relationship with inflexible Lisp REPL syntax.
Let's try to find a work-around to this issue once and for all!
Certainly the Clojure REPL will never, ever accept "a iron curtains" as valid syntax in the reader, unless we're prepared to write our own custom fork of the Clojure programming language. But if we're using Emacs to run our REPL, we can do a little somethin' on the Emacs end of things, to wrap our command in a way that the Clojure reader will accept:
(defun wrap-clojure-command ()
  "Take the command written at the cider prompt and execute it as (repl-command \"...\")."
  (interactive)
  (goto-char cider-repl-input-start-mark) ; jump to the start of the current REPL input
  (insert "(repl-command \"")             ; open the function call and the string
  (goto-char (point-max))
  (insert "\")")                          ; close the string and the call
  (cider-repl-return))                    ; submit the wrapped form to the REPL

(global-set-key "\M-z" 'wrap-clojure-command)
With this command, we can now enter custom commands into the Cider REPL, then hit the M-z hotkey instead of ENTER to execute these commands. The Emacs function then wraps our REPL command within a standard Clojure string, inside a function call:
> a iron curtains
> (repl-command "a iron curtains")
To take advantage of this Emacs hotkey, all my Clojure/Clojurescript programs then have a function named repl-command that is responsible for parsing my REPL string and performing the appropriate action.
By default, Cider launches new REPL sessions in the user namespace. I like this default behavior, since it forces me to explicitly declare any namespaces I'm interacting with as I work in the REPL. However, if I'm executing one of my custom REPL commands, I don't want to have to deal with namespacing issues. Therefore, I have the following function declared in my personal Clojure libraries:
(defn user-repl-command [command]
  ;; bind *ns* so the def lands in the user namespace, regardless of
  ;; which namespace this function is called from
  (binding [*ns* (create-ns 'user)]
    (eval `(def ~'repl-command ~command))))
This function takes an arbitrary function and declares it in the user namespace as user/repl-command. So at the bottom of the "core" module for each of my Clojure apps I can now declare my custom REPL command handler, to "register" it for use in the REPL:
Now I can immediately execute arbitrary commands for my Clojure app right in the Clojure REPL, using my own custom command syntax, even in the default cider REPL namespace!
Admittedly, this is all very hacky, but I've used this workflow all day, every day, for about a year. That, alone, seems like a good enough reason for sharing my REPL workflow with others.
But certainly I'd love to come up with a less hackish solution to this problem. If any of you have ideas for cleaning up my workflow, please share them on Twitter!
Since I just got into 3D printing, it was inevitable that I would create an official Lisp Alien 3D model. I've finally been able to print it out, and today my 8-year-old daughter helped me pose it in some killer action shots!
It comes in two STL file versions, depending on whether you want to make a single-material or dual-material print. Grab the files here:
I modeled the Alien in Shapr3D on an iPad Pro. For a program used primarily for industrial modeling, I was pleasantly surprised how well I could create the organic shape of the Lisp alien's body. The reason I decided to learn a more "industrial" program is that, in my past 3D modeling experience, these sorts of programs can get glitchy when you start creating more complex designs. Shapr3D did not disappoint me: it could handle very messy boolean operations and beveling without even a single hard crash.
I printed the physical copy you see in these photos on my Elegoo Mars resin printer, using Elegoo Clear Green for the body and Elegoo Standard White for the add-ons. It came out almost perfect, though I definitely had to sand it a bit to give the alien's skin a smooth finish.
Here you can see some more shots of the alien going about its day:
I gave the alien a hard time for drinking while riding his ATV, but he swears it's fine because he was driving on private property (though I have my doubts that that's correct...)
Anyway, I consider these models to be public domain, so feel free to use them for any physical or digital projects that you like! If any of you 3D print additional versions of the alien, be sure to send me a tweet!
I like to think of myself as a creator: I love to program, write, create art, and build stuff!
And, like most other creators, I love reading about crazy productivity schemes that promise to help curb procrastination... oh, to finally attain the dream of ultimate productivity!
But at the end of the day, there's only one such system that has ever helped me in any appreciable way: the classic "Getting Things Done" system. (If you're not yet familiar with this system, read the box below for a summary of the minimal variant of GTD I've used for many years.)
However, despite its benefits, it has always felt to me like GTD just doesn't work well for creators. In this post, I'm going to describe a new productivity system I've created, called Task Tension, that makes the GTD system more compatible with the creator mindset.
To understand the rest of this post, you'll need to have a basic understanding of the GTD system. Those of you who already know GTD can skip this box, which describes the simplified variant of GTD that I use.
In my minimal version of GTD, all you need to do is maintain four lists:
For a quick example, a typical task for me might be "Cancel Hulu". An example project might be "Get Oil Change" which would have a concrete task associated with it of "Call Auto Shop for appointment" (Note that all projects in GTD should always have at least one "task" or "waiting" entry associated with them, otherwise the project would be permanently stuck.) Finally, in my "someday" list I have items such as "take online Linear Algebra class", which is not an active project right now, but something I want to keep in mind as a potential project in the future.
There are several other features of classic GTD that I personally do not use and that are not relevant to the remaining discussion in this post (such as the "Inbox", "Contexts", "References", or "Triggers").
Creators are people who create things for others, whether through writing, art, software, music, or another medium. There are many traits and strengths shared amongst great creators, but two stand out: almost all of them are good at handling feedback, and good at handling their obsessions. Those are the two most important traits every creator must foster in themselves.
A creator feeds on energy, and this energy can come from both internal and external sources. Audience feedback is the most important external source, which helps you to assess the quality of your work and provides encouragement. A creator's personal obsessions are their primary internal source of energy, and are essential to producing great work. However, you have to properly channel obsessions, or they can get out of control and become counterproductive.
So how do these two key traits of creators relate to GTD? To find out, here is a quote from GTD, one that I think really gets to the heart of what GTD is all about:
There will always be a large list of actions that you are not doing at any given moment. So how will you decide what to do and what not to do, and feel good about both? The answer is, by trusting your intuition.
– David Allen
As you can see, David here is very careful not to put any hard-and-fast rules on which momentary tasks take preference. This is because, at its core, GTD is a system for minimizing anxiety. The reason GTD is so effective for people is that it first makes them organize their tasks in their totality, then gets them to complete these tasks based on how much anxiety each of them is causing. (Later in the book, David does provide some more concrete guidance on task choice, based on things such as "energy level" or fitting tasks into your longer-term vision, but even with this later guidance the primary focus remains on minimizing anxiety.)
The Task Tension system I'm now going to describe has the exact opposite objective: it provides exactly the hard rules for selecting tasks moment to moment that are missing in GTD. This is because productivity for creators is precisely about selecting tasks in a smart and methodical way. Creators like us need to select tasks in a manner that leads to regular releases, so we get the feedback we need and crave (fully gathering external energy). On the other hand, we also need to use our obsession to create longer and more thoughtful works, free from the constraints of hard deadlines (fully harnessing our internal energy).
The Task Tension system is a modification of GTD that I've created in order to make GTD compatible with the creator mentality. To accomplish this, it modifies GTD in three ways: enhancing the project list, logging hours on creation tasks, and changing the task selection process.
You continue to maintain projects on your project list as with GTD, but any projects that involve creating something (called "creation projects", or CPs for short) now have additional rules in the system. A creation project could be writing a blog post, shipping a video game, or releasing a song. In short, any project you have that involves creating something for an audience falls into this category. Try to avoid distinguishing between different forms of creation, as each CP worth your time should be equally meaningful to you: writing your blog post is just as important as carpentering a table or writing an entire book. With Task Tension, all of these are similar forms of creation.
Here are the new rules for managing CPs on your project list:
There must always be exactly four CPs in your project list. You should always make sure you have additional CPs in your "someday" list; that way, you can maintain the four-item target by adding a new item every time you complete an existing CP. (Other "non-creative" projects such as "Get oil change" are not affected by this limit and continue to coexist on your project list in the regular GTD manner.)
When a new CP is added to your project list, also add a time estimate for how many hours you guess will be required to complete this creation project. Note that this estimate can be very rough, and does not determine when the CP will be completed. This system purposely makes no attempt to quantify the rate at which these hours of work will be completed.
The hour estimates for the CPs in your project list must fall into the following scales, in terms of the number of hours they require:
You must have exactly one creation project in each of these hour ranges, so that your active CPs always vary in their scale. This means that if you want to put a new CP in the list that conflicts in scale with an existing project, you have to stop yourself and instead find another CP that is shorter or longer, so it doesn't conflict in scale with your other active CPs.
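To make the conflict rule concrete, here is a minimal Clojure sketch (the scale keywords are placeholders, not the actual hour ranges):

```clojure
;; each active CP is a map like {:name "Blog post" :scale :small}
(defn scale-free? [active-cps candidate-scale]
  (not-any? #(= candidate-scale (:scale %)) active-cps))

;; only admit a new CP when its scale slot is open
(scale-free? [{:name "Blog post" :scale :small}] :small) ;=> false
(scale-free? [{:name "Blog post" :scale :small}] :huge)  ;=> true
```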
Whenever you complete a task that is part of a creation project, you will now roughly log the number of hours you spent in a new Tracking Log: a tracking sheet that gets added alongside the four core GTD lists (the task, project, waiting, and someday lists).
The Tracking Log only keeps track of four numbers: the total number of hours you've spent on each of the four project scales. So it will look something like this:
Note that these four numbers just increase indefinitely, and it is perfectly normal to have more hours logged at a scale than the size of that scale. (That just means you've already completed several projects at that scale, and the total time spent on projects at that scale is higher than any individual project's size.)
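In Clojure terms (using placeholder scale names, since the actual hour ranges may differ), the Tracking Log is just a map of four ever-growing totals:

```clojure
;; the log: cumulative hours per scale, increasing indefinitely
(def tracking-log (atom {:small 0 :medium 0 :large 0 :huge 0}))

;; roughly log hours against a scale when you finish (or pause) a task
(defn log-hours! [scale hours]
  (swap! tracking-log update scale + hours))

(log-hours! :small 2.5)
(log-hours! :huge 4)
```

Nothing ever resets these numbers; completing a project at a scale simply means future hours at that scale accumulate on top of the old total.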
Also, if you don't fully complete a task but reach another natural breaking point (such as lunch, the end of day, or you simply have run out of steam on that project) it is also fine to log your partial hours early, without completing the task entirely first. This makes it easier to remember how much time you've spent, and also gives you the positive feeling of marking some progress towards your goals.
As promised, Task Tension introduces a new, rigorous moment-to-moment task selection process to GTD. It goes as follows:
If you follow the rules above, you will at all times have four active creation projects, ranging from a project that is only a few hours in length all the way up to a HUGE project consisting of many hundreds of hours. Moreover, you will be spending a roughly equal amount of your time on projects at each scale, releasing many smaller projects for each of the larger ones.
Following the strategy I've described, you will be able to use your obsession to focus for long periods of time on a project, but not to the exclusion of other creative and noncreative tasks, which is essential to remaining healthy: If you just seclude yourself in your basement for a year to create "that one epic project" it will likely not be good for your mental health, and probably won't be good for your artistic vision either (since you are denying yourself the feedback you get from your audience by also working on smaller projects at the same time.)
Task Tension is all about balance: A creator has to maintain balance in their workflow in order to be successful. This balance needs to include feedback on smaller projects from their audience (which will send lots of positive energy your way in the process) but also needs to give the creator space to pursue a larger artistic vision. Hopefully, Task Tension (or some similar system of your own devising, based on these concepts) can help you develop such a positive workflow for your creative ideas!
Since I'm a software developer by trade, I have of course already built my own app that implements Task Tension. However, my current app is highly specific to one person's workflow and would not yet be usable by other folks. If it turns out enough people find the ideas in this post useful, I may generalize my code and release a more universal implementation.