Aug 30 2011

How Do Superpowers Affect Your Characters’ Perspectives?

Published at 2:53 pm under Superpowers, Writing Articles

One aspect of Alphas that seemed really believable and well-written to me was that a villain who could control physical events and influence probabilities became paranoid, reading malevolent intent into the failures of others.  He had trouble understanding that most people don’t have that level of control.

 

Here are some other possibilities that come to mind.

 

1. Psychics might be very cynical or very optimistic about human nature, depending on whose minds they have read.  In a situation where their ability to read minds does not work (such as over email or on the phone), they might become wildly distrustful because they have no way of knowing whether they’re being lied to.

 

1.1. A psychic might have privacy issues.  Courtesies that seem commonplace to most regular people, like reading a suspect his Miranda rights or not listening in on a private conversation, might not make any sense to a psychic.  If the character grew up with other people who also had psychic powers (like an alien civilization), this would probably have a major impact on how he interacts with other people.  For example, if you grew up among psychics, you’d probably be used to everybody in a conversation already knowing everything important.  In a conversation with normal humans A and B, you might unwisely reveal something to B that A wants to keep secret.

 

1.2. A psychic might have major identity issues, particularly if he/she doesn’t have much control over the psychic powers.  For example, the psychic might have trouble distinguishing between his/her own thoughts and the thoughts of people nearby.  In The Taxman Must Die, one decidedly scrawny psychic can’t quite remember whether that memory about rampaging through a bank vault is his or somebody else’s. This is one of the limitations I use to keep the psychic’s powers from short-circuiting the mystery angle.  He remembers somebody committing a crime, but that memory has given him only a few vague clues to pursue.

 

2. A character with incredible speed and/or reflexes might perceive time as passing very slowly.  If he perceives it that way all the time, he might get impatient with people who move/talk/think much more slowly (i.e. everybody).  For a character with incredible reflexes, time might only seem to slow down at particular moments, like stressful events or danger.

 

3. Somebody with the ability to control and/or influence a particular element or phenomenon might be really sensitive to it. 

  • Somebody with the ability to control heat/fire or ice might be more sensitive to temperature changes, much as some people get chills when they’re scared.
  • Somebody with magnetic abilities might feel metal objects moving and might get bothered by rush hour.  Maybe your Magneto can feel Wolverine approaching because Wolverine’s skeleton is mostly metal.
  • Somebody with the ability to influence/control plants and/or animals might pick up environmental cues other people miss.  For example, maybe your plant-controller is more likely to notice snapped twigs, a slight indentation in a patch of grass and/or leaves knocked from the top of a bush, and conclude that somebody came through in a hurry.  The ability to empathize with plants and/or animals might affect the character’s mindset, as well.  For example, Poison Ivy hates humans (those plant-killing fiends!) and Beast Boy is a vegetarian.  Incidentally, I think the best reason to be a vegetarian is not because you really like animals, but because you really hate plants.

 

4. Superpowers, incredible abilities and/or experience might make somebody more precise in a particular way. For example, if a character has a time-related ability or is as meticulous as Batman, he might avoid figurative phrases like “in a minute” unless he’s actually talking about sixty seconds.  This could lead to annoyance/confusion when he’s talking with people who aren’t so precise.  (“Why did you say you wanted a minute if you really wanted five minutes? That’s not even close!”)

 

 

5. Someone unusually intelligent might be unusually confident if he/she has had enough success shaping events to his/her will.  Alternately, more intelligent people might actually be less confident because they’re more aware of their limitations and failures.  Or both!  Ozymandias was confident enough in his mental abilities to destroy New York City because he was sure it was the only way to save the planet, but he wasn’t sure that he could grab a bullet out of the air.  (PS: If a superintelligent character is totally confident, he might have a crisis of faith if he fails to anticipate something or gets outmatched by events and/or an adversary).

 

6. Somebody with heightened senses might notice seemingly inconsequential details.  For example, Sherlock Holmes creates and tests theories of a crime by focusing on minor details.  (In Holmes, for instance, he figures out that a supposed suicide is actually a murder by proving that the victim was left-handed.)

 

 

7. A character with incredible abilities might have trouble dealing with low-level threats and situations.  For example, if you’re strong enough to hurl a tank, it might be really hard to incapacitate an unpowered thug without breaking at least a few bones.  Alternately, the character might be so concerned about avoiding unnecessary damage that he makes tactical decisions that allow criminals to escape and/or have other undesirable consequences.

 

7.1. Incredible superpowers would probably make it more challenging to maintain a secret identity.  If a super-speedster can run faster than 100,000,000 miles per hour, he might have trouble distinguishing between 30 and 60 miles per hour.  Wouldn’t you notice if Wally West started running twice as fast as Usain Bolt?

 

8. Somebody who’s grown up with a superpower might have trouble relating to people who don’t have it (and vice versa).  If a superhero (or villain) has a power that really affects how he/she experiences the world, it might raise interesting social, mental and/or medical challenges.  For example, imagine being the only psychic on Earth.

  • How do you describe your psychic experiences to a non-psychic?  (Which non-psychics would you feel comfortable enough with to try? Would most people find your powers unsettling and/or dangerous?)
  • If you need a second opinion about a psychic or supernatural experience–“what do you make of X or Y?”–who would you ask and how?
  • If you have some sort of medical issue relating to your abilities, who would you talk to?  It’s unlikely the Mayo Clinic has seen anything like this.  (Likewise, what if a superstrong character strains a muscle while stopping a train? “Take two aspirin and call me in the morning” probably won’t suffice).

 

9. A character new to superpowers might have trouble getting used to new sensory experiences.  For example, hearing everything within a block of you could be a hassle when you’re trying to sleep.  If your sense of smell has gotten a hundred times better, eating might feel decidedly unusual.


39 Responses to “How Do Superpowers Affect Your Characters’ Perspectives?”

  1. Brian McKenzie (B. Mac) on 30 Aug 2011 at 3:21 pm

    PS: Thanks to Wings for helping me brainstorm on this. If you’re also interested in SN-related brainstorming, please let me know at superheronation-at-gmail-dot-com.

  2. Myna on 30 Aug 2011 at 3:52 pm

    Oh hey, this is REALLY helpful, I never thought of superpowers in quite this light and how they would affect your mindset. The only one I really figured was super-speed, and being impatient because of the time slowing down thing. Thanks!

  3. Grenac on 30 Aug 2011 at 4:20 pm

    “I’m not a vegetarian because I love animals. I’m a vegetarian because I hate plants.”

    – That is one of the best quotes I’ve heard in a long time.

  4. ekimmak on 30 Aug 2011 at 8:10 pm

    “I wouldn’t think someone who can talk to plants would be a vegetarian.”
    “That’s because you’ve never talked with them.”

  5. B. McKenzie on 30 Aug 2011 at 10:43 pm

    I injured myself laughing, Ekimmak.

  6. ekimmak on 30 Aug 2011 at 11:05 pm

    Pretty sure it could be funnier, though.

  7. The ReTARDISed Whovian on 31 Aug 2011 at 4:53 am

    Huh. I never thought of this before, this will be really useful for me. 😀

    Also, hiding a huge secret would stress anybody out. Now, imagine that secret was that you had superpowers and had a secret identity. Wouldn’t you be worried that maybe your mother would nag you about it being dangerous? That if your girlfriend finds out and you break up, she will tell everybody? That maybe you will be blackmailed? I think that would definitely affect somebody emotionally/psychologically.

    They could become withdrawn and suspicious of everybody, develop depression, have a breakdown, or feel the need to lie about absolutely everything on the off-chance that somebody fits the pieces together and works it out.

    Perhaps it could even lead to guilt issues – I think it was Spiderman 2, where Peter gives up on being Spiderman, and because he doesn’t use his powers when saving the little girl from the fire, he doesn’t have time to save two other people who are also trapped.

  8. Wings on 31 Aug 2011 at 7:01 pm

    I love messing around with this. One initial trait of my plant manipulator is that he’s a self-proclaimed meatitarian because he doesn’t want to eat his plants.

    …I should really get a nifty author bio like all the rest of you.

    – Wings

  9. B. McKenzie on 31 Aug 2011 at 8:44 pm

    If you’d like me to insert something in your author bio, please let me know!

  10. Max H. on 05 Sep 2011 at 7:06 pm

    I use 3.1 a lot. My umbrakinetic has perfect night vision, the aerokinetic can detect minute drafts and changes in air pressure (so she can detect when a storm is coming or if a door’s been opened 100 feet away), the cryokinetic is immune to cold, etc.

  11. GigawattConduit on 09 Sep 2011 at 4:13 pm

    I’d imagine that someone who absorbs energy would become addicted to it, like a drug addict.

  12. Damzo on 10 Sep 2011 at 7:09 am

    Yeah, they would but those that are strong would be able to resist it, I think.

  13. Grenac on 17 Jan 2012 at 8:56 pm

    Question: Does a character with enhanced speed have to fit the common personality traits that accompany that power? (Like impulsiveness, etc.) Because my character is neither of those things. She’s things…just not those.

  14. Myna on 18 Jan 2012 at 3:54 am

    I don’t think so, Grenac. There’s always room to change up your character a bit from the norm, so I don’t see why your char would have to have the common personality traits that fit her power. : )

  15. Marquis on 18 Jan 2012 at 7:54 am

    Goodness, no. I hope your character has a less predictable personality.

  16. Teal on 21 Jan 2012 at 10:55 am

    But… animals eat plants. The whole food pyramid thing. To raise a cow for eating, you have to feed it a whole lot of grass. It’d be cheaper plant-wise to just eat the plants straight; the cow is an inefficient conversion mechanism.

  17. B. McKenzie on 21 Jan 2012 at 2:17 pm

    “It’d be cheaper plant-wise to just eat the plants straight; the cow is an inefficient conversion mechanism.” In a very abstract way, that makes sense, but in a more human way, it totally makes sense to me that a plant-themed character would find it more pleasant to eat animals than plants. He can even think of it as a death penalty for cows. 🙂

  18. Teal on 21 Jan 2012 at 7:18 pm

    Said cows wouldn’t even exist if they weren’t being raised (and fed) to be eaten; by buying beef, one is essentially saying to the beef industry “raise more cows, please.”
    If someone doesn’t want to harm plants, a better approach would be to eat plant products that can be gathered without harming the plant, such as fruits. A very plant-themed character could try to develop photosynthesis or some other energy source for himself.
    In other words, sorry, the logic here does not add up for me. (I do like this blog in general, though; it’s got some great insights.)

  19. deadmanshand on 22 Apr 2012 at 5:36 pm

    I’m considering the perspective ramifications of one of my characters’ abilities, and I’d like to see what those here think.

    Jared Bell

    He is a crude shapeshifter with unnatural strength and resiliency. The more he exerts himself the more alien his nature becomes as his human facade breaks down. Basically, Jared is a shoggoth – not literally but it’s a good analogy – playing at being a man. A protoplasmic, alien creature that forces itself into the form and senses of the man it once was.

    This has drawbacks, of course. One being the above-mentioned dips into a squamous Lovecraftian entity’s psyche and senses when he pushes himself. Two is that he is – at best – a primitive shapeshifter, not capable of mimicking much, which has the unfortunate side effect that his human appearance is never quite the same each time he resumes it. Effectively, his face will always be a stranger’s to him, always changing over time.

    What do you think?

  20. B. McKenzie on 22 Apr 2012 at 7:40 pm

    I think the limitations here are interesting–in contrast, I think most of the shapeshifting heroes so far shapeshift effortlessly and flawlessly (e.g. Mystique and Martian Manhunter). In addition, since he’s not actually a human to begin with (or even close to one), you might be able to make his perspective more three-dimensional by having him struggle to “act” human–for example, even something as basic as eye contact might be really difficult for a different species.

  21. deadmanshand on 22 Apr 2012 at 8:05 pm

    Maybe I worded part of that wrong – he was a human who, through the origin event, becomes this. Part of his backstory is that it took several weeks for him to regain human form.

    The video is epic by the way.

  22. B. Mac on 22 Apr 2012 at 10:02 pm

    “Maybe I worded part of that wrong–he was a human who through the origin event becomes this.” Ah, understood. Well, I think he could still have some interesting perspective issues. For example, Dr. Manhattan was originally human and perhaps more interesting because he was aware of what he was missing.

  23. deadmanshand on 22 Apr 2012 at 10:35 pm

    I’d had a few thoughts along that route but I’m too tired at the moment to coherently state them.

  24. deadmanshand on 23 Apr 2012 at 5:52 pm

    I was thinking that it may affect his attention to appearance – his own and others’. But whether in an overly attentive kind of way – working at memorizing faces, even keeping a visual journal of his own changing appearance – or in an entirely neglectful way due to the fluidity of his own self-image, I’m not sure. I guess the actual question is: which is more interesting? Or are there other options that I haven’t thought of?

  25. Isabella on 14 Jul 2013 at 4:25 pm

    This is pretty late, but I think one way someone who talks to plants could blow her identity is by not knowing what she can do until a few chapters into the story (or comic). I think it’d be interesting to see a vegetarian get a little freaked out once she finds out she can talk to plants, because then she might want to stop eating plants, and someone would definitely take notice if they knew she was vegetarian and saw her eating a hamburger. It could also be hard for her to adjust to eating meat (especially if she grew up in a vegetarian home or became vegetarian at a young age), so that could lead to something (like, if the character gets sick from eating meat, she’d either have to suck it up and keep eating meat or go back to eating plants – and of course there are other food groups and things she can eat. I’m just saying if she did eat meat, it could blow her cover).

    And I also have a question–how would someone who talks to dead people react? Would their perspective change like a psychic? Would they end up having privacy issues, too?

  26. Yuuki12 on 12 Aug 2013 at 8:26 pm

    I thank you for posting this article. It has been a great help in fleshing out my character Derek. Given his powers over sound manipulation (notably his enhanced hearing), he was sensitive to loud places. With the story taking place in Seattle, the noise downtown grated on his ears. Even in crowded areas, like his school cafeteria, he couldn’t stand the noise and would step out.

    It is this characteristic I emphasized even after he learned to control his ears. Derek learned to appreciate subdued, quieter places (forests, for example), which also made him a more active listener to everything.

    How is that for developing a perspective? Does it need to be fleshed out more?

  27. Blur on 02 Jun 2014 at 6:58 pm

    As I mentioned about Gerhard Schultz in the character development article, Gerhard’s abilities gradually change the way he thinks. In the beginning he is very impulsive and acts before he thinks; however, knowing the damage his strength could cause (having learned the hard way), he gradually becomes more cautious and meticulous as the story progresses.

  28. Aj of Earth on 10 Feb 2016 at 11:36 am

    I very much enjoy this article. It’s especially relevant to where I am in my manuscript (just shy of the climax, dig it.)

    Spending so much time developing and learning characters, in addition to my first-time attempts at building a plot, handling structure and arcs, yada… It’s still a challenge to make sure I’m effectively folding the perspective of superpowers into that mix. I feel I’ve been doing my best job certainly, but revisiting this article I find there’s a lot of opportunity to develop that even further. I appreciate that. Interestingly, several of the examples given here feature a superpower that I’m using in my work. Heh, right on.

    Another great read! Really, thanks again for the amazing site!

  29. Livinia on 14 Feb 2016 at 6:23 pm

    I really like the idea of a person who communicates with plants only eating from plants that can survive the loss of the food. It’s actually a diet called fruitarian but it’s not the most healthy. Strict fruitarians only eat fruit though I’m not sure why. Others eat nuts, seeds, fruits, and some vegetables. However, weight loss is a severe risk. Additionally, fruitarians frequently (from my admittedly deficient position) have nutritional deficiencies, aside from the normal vegetarian/vegan deficiency of B12 which is in those cases made up by taking a supplement. I’m not sure if B12 supplements are fruitarian, though. Fruitarians have had their children taken away in the U.S. for the nutritional deficiencies that can occur. I don’t know if it’s workable but it might be interesting to try out in a story.

  30. LuckyClockwork on 01 Apr 2016 at 3:40 pm

    I’m developing an idea for a story, and I’d like your feedback.

    In the near future, the entire Internet and computer industry will belong to one mega corporation (which will be a mostly original but still fairly obvious stand-in for Google). But far in the future, the supposedly “too big to fail” company crashes, leaving the world, which by now is extremely dependent on the Internet – to the point that most people are hooked up to it from birth and can barely survive without it – crippled and just barely functioning.

    To replace the massive network of servers, clients and nodes, they build a massive machine out of a haphazard combination of old computer parts and more traditional “mechanical” parts, such as gears, levers and steam-powered everything. (Yes, I do love the steampunk style. So sue me. 🙂 ) This is hooked up to people’s brains as a last resort to fix the old microchipping system, so that no one dies as a result of being cut off from their digital parts.

    But there’s a problem. The former leaders of the mega corporation, used to a lavish lifestyle and the ability to bring the wildest dreams of science to life with a command to their creative legions, were plunged into poverty and desperation when their company crashed. One especially desperate former executive – basically the one with the most mouths to feed – is trying to take over the operation of the machine and thus rebuild his beloved company. Problem is, he’s an executive, not a programmer, and he certainly has no experience with steam power. So in his attempts to reprogram the machine to suit his own ends, he is slowly destroying it with his incompetence. Every now and then, the machine simply stops. When it does, everyone hooked up to it (which is *everyone*) automatically goes into a protective, amnesia-causing coma, including him. Completely shut off, no one remembers the increasing periods of time during which they were essentially dead.

    One day, however, the executive accidentally activates an ancient program in one of the old servers built into the machine. It is an incomplete AI, designed to be self-aware and a perfect “programmable friend” (read: slave). When he, with the help of a friend and fellow executive (who is brilliant, as a foil to his mostly average self), discovers what he has found, all the former executives realize the cash potential of this old invention, and they, along with a few of their most loyal programmers, form a secret brotherhood to finish and replicate this AI, which will catapult them back to wealth – or at least out of poverty.

    The AI is the protagonist. It (I’m still debating on assigning gender) starts out self-aware but emotionless, moral-less, and very confused. There will definitely be references to Isaac Asimov. However, there will be an inciting event (not decided yet; any help?) which will cause it to become separated from its “masters”, whether on purpose (escape) or by accident, and be forced to travel by “zapping” from computerized part to computerized part, often having to cross dangerous areas of degraded computers or even pure mechanics.

    At some point, it will discover that it can also “zap” into the chips in the heads of humans, effectively possessing them in the process. But the chips are intricately connected to the host’s human brain, and the more time the AI spends in a human brain, the more human it becomes, picking up things like a *limited* (it is still an AI) ability to experience and process emotions, a morality, and other human ideas & experiences (other human traits it might pick up? Anyone?) It may also pick up parts of the host’s personality and perhaps even memories. If it is separated from its “masters” by accident, one of the most important things it will develop is a sense of freedom.

    Its goal will be to escape the corporation and either A. build itself a body and blend in as a human; this will be a slight “downer ending” because it will have to fake its identity for the rest of its life, eternally avoiding its former “masters”. or B. somehow defeat its enemies and gain rights as a sentient being in its own right; this would be more akin to the traditional superhero story, but it feels cheaper to me. The original idea was to have it also save the world from the literal “ticking down clock” (haha, get it?), but I may cut out that storyline and have it be a more personal-redemption type thing. It would be the heroic thing to do, but at this point I’m not sure how heroic my protagonist is going to be.

    Wow, this is a really long comment. I’ll wrap it up with a plea for feedback: How can I develop my protagonist and antagonists? Any plot holes, cliches, or other mistakes you can see? What should be the inciting event for my AI’s story? Other feedback?

    Thanks!

  31. Tyleenia Taylor on 01 Apr 2016 at 4:46 pm

    Hey, Teal? That’s what one of my characters can do. Her dad experimented on her and now she’s pretty much a plant that looks and acts like a human. She can understand the plants, has sap-like blood (maple syrup, anyone? :-p) and her appendages can kinda split into vine-things.

  32. B. McKenzie on 01 Apr 2016 at 5:27 pm

    “But far in the future, the supposedly “too big to fail” company crashes, leaving the world, which by now is extremely dependent on the Internet – to the point that most people are hooked up to it from birth and can barely survive without it – crippled and just barely functioning.” I feel like this world is so reliant on a single company that it might raise “idiot plot” problems. When the world’s population signed over their bodies to a search engine, did they at least get something kickass in return?

    “Problem is, he’s an executive, not a programmer, and he certainly has no experience with steam power. So in his attempts to reprogram the machine to suit his own ends, he is slowly destroying it with his incompetence.” I’d suggest avoiding incompetence here as an explanation for a major plot event because I suspect it may reinforce the idiot plot vibe. May help to consider alternate reasons for mechanical damage here, maybe something like desperation combined with experts that are poorly equipped and poorly trained for the circumstances in which they now find themselves.



    “Its goal will be to escape the corporation and either A. build itself a body and blend in as a human; this will be a slight “downer ending” because it will have to fake its identity for the rest of its life, eternally avoiding its former “masters”. or B. somehow defeat its enemies and gain rights as a sentient being in its own right; this would be more akin to the traditional superhero story, but it feels cheaper to me.” In a work where billions of people are completely ****ed over by a loss of most technology, I feel like whether a robot can live honestly/openly is probably not going to feel very high-stakes?* On this front, option B feels like an improvement, but I think any ending in which billions of people are still ****ed will probably be more than slightly dark (which wouldn’t be a problem as long as you and readers are on the same page).

    *Counterexample: Blade Runner / Do Androids Dream of Electric Sheep? made it hyper-interesting, and I don’t think it could have worked without the tragic ending.



    Out of all of the things that a robot could seek, I feel like freedom and/or recognition as an individual/equal are probably the first that come to mind. I’d recommend considering more unusual quests, hopefully something that distinguishes him/her from most other AI protagonists. E.g. he/she wants to become a grandmaster painter or train therapy dogs or become a soccer player* or something else that resonates with him for whatever reason (e.g. delivering the mail in The Postman or becoming a chef in Ratatouille or the Wall-E / EVE / Hello Dolly love triangle or becoming the world’s greatest assassin in Sisterhood of the Traveling Pants**).

    Alternately, maybe humans/programming gave him a particular function, but it’s one that’s no longer feasible (e.g. because most of the technological systems that he could have used are down). As an intermediate quest, maybe he sets about turning the lights back on and getting the systems back running. Then the bigger question would probably be what he does when the systems are back on. Does he set a new course for himself? Are the humans that helped him get as far as he has ready to let him become, say, a painter or leave to find himself with millions of lives on the line?

    *Darker than Blade Runner.
    **Only to find out that her best friend has stolen it for herself, the tramp.

  33. LuckyClockwork on 01 Apr 2016 at 7:13 pm

    Huh. Thank you for your feedback.

    When you first mentioned the “idiot plot” part about the company, I confess I rolled my eyes a bit. It may be an idiot plot, but it’s very realistic. Imagine what would happen if the Internet crashed. Can you? Okay, stop. You’re going to have a heart attack. But when you specified “When the world’s population signed over their bodies…”, something clicked. My idea of the population chipped into the Internet and reliant on it for the order of their lives was realistic, but you made me realize that the idea of people’s entire physiological functions being dependent on their chips is way too far-fetched. The chips will have to be less powerful over people. Thank you for mentioning this.

    “Incompetence” really is the buzzword for bad writing around here, isn’t it? He’s not supposed to be incompetent in general; the idea is that he’s a businessman who isn’t trained to actually handle programming, but is desperate enough to try. But “May help to consider alternate reasons for mechanical damage, maybe something like desperation combined with experts that are poorly trained and poorly equipped for the circumstances in which they now find themselves.” was very helpful. It occurred to me that someone who has no experience or training would probably not be allowed around the machine that basically runs the world. Perhaps the person who actually tries to infiltrate the machine – and who discovers the AI – will be one of those loyal programmers I mentioned. The executives will be more about planning and less on-the-ground. On the other hand, I like the idea of avoiding “evil overlord” cliches by bypassing the “evil overlord” altogether and having the formerly powerful villains work on the ground like mooks – a job they’re not particularly good at. Still, it’s a worthwhile concern.

    “In a work where billions of people are completely ****ed over by a loss of most technology, I feel like whether a robot can live honestly/openly is probably not going to feel very high-stakes?” Um… I’m confused. I don’t understand what you mean here. If people have lost access to the technology they once relied on, wouldn’t they be all too happy to enslave an AI to replace it? Or am I missing something here?

    “…but any ending in which billions of people are still ****ed will probably be more than slightly dark” You know, I really hadn’t thought of how ****ed these people’s lives would be in the world I designed. I was so focused on my main plot & characters that I hadn’t considered how the setting affected everything. I definitely want to have my setting be improved by my plot. But I don’t want it to be too happy, either. I certainly don’t want the major flaws of the society (i.e., over-reliance on technology, accepting-ness of poverty and slavery) to be fixed by one rogue AI. On the other hand, I do want this rogue AI to challenge the precepts of said society. I want to have something in between “the society is in ruins after all technology is wiped out” and “the AI returns everything to status quo.” Actually, perhaps it would actually be a good idea to have the clock continue ticking down – it would give the people an impetus to transition their society. Remember, the protagonist would have a body by the end of the book, which would keep it from dying if the machine shut down.

    Now that I think of it, the need to have a body by a certain time limit would add an interesting dimension. Perhaps, along with the whole “not dying” thing, it would want a body to feel and sense things – something that could add something like a flaw… maybe, once experiencing things like taste, smell and touch, it would become overly sensual and crave such feelings? Maybe, since it would occasionally be possessing people, it could even commit some questionable acts while in their bodies? After all, ethics would probably be one of the later things it would acquire. Is the idea of a nonhuman character who wants to feel human sensuality too worn?

    Your last two points are fascinating and require further thought. You’re right, I hadn’t thought about how overused that android plot is. I will definitely consider a different goal for it. What it could be, though, I’m not sure. I will need to think more.

    Your last point is also a very good idea. It is in general a good thought – though I will have to think about what particular function he was programmed for. Also, it made me think. My first thought was to have the “build a body, become a recognized being, etc.” as the intermediate goal, and then, once it has built humanity, test said humanity by having it fix the machine. But it’s not a very good test of humanity, and it’s something that would probably be your generic android’s first thought. In the beginning, it wouldn’t have many ideas of self-determination, so it would first set about cleaning up the machine and “getting the lights back on,” as you put it. In the process, it would discover how to “zap” into people’s chips, and once it did that, it would start learning human experiences and behaviors. Once it had realized its own potential, *then* it would try to create a life for itself. It may even have to make the decision to pull the plug on this machine that dominates society, to force people to live without technology. A morally ambiguous choice like that would be a better test of its humanity.

    What do you think?

  34. B. McKenzie on 02 Apr 2016 at 1:25 am

    “‘Incompetence’ really is the buzzword for bad writing around here, isn’t it?” I don’t think it actually comes up all that often. Of our 32,400 comments, only 123 (.3%) have included “incompetent” or “incompetence.” “Personality” came up 10x as often, “plot” 20x, “cliche” 5x, “superpower” 10x (more than deserved), “fight” 20x, “developing”/”development” 10x, “confuse” 5x, “dialogue” 4x (far less than deserved), and “rocket propelled rickshaw” only 2 times in 32k comments (a cosmic injustice).



    “you made me realize that the idea of people’s entire physiological functions being dependent on their chips is way too far-fetched.” Alternately, if people do give a company that much access/trust, maybe there was some huge benefit. (E.g. embeddable implants to enhance neural processing, mind-machine interfaces for interacting with machines, heart monitors which can call emergency services when necessary, etc).

    Ack, migraine oncoming. Will hopefully remember to respond to core of comment later.

  35. LuckyClockwork on 02 Apr 2016 at 7:10 pm

    Ugh, migraines are the worst. I hope you feel better. Try drinking a little bit of lemon juice (without sugar). I’ve found that sometimes that helps me for whatever reason. I will await your response.

    “Rocket propelled rickshaw?” Cosmic injustice indeed. Do you know if that was actually used in a story? Because I would certainly like to read that.

  36. B. McKenzie on 02 Apr 2016 at 9:53 pm

    “Do you know if that was actually used in a story? Because I would certainly like to read that.” It was offhandedly proposed for a work in progress.

  37. B. McKenzie on 03 Apr 2016 at 10:36 am

    “It may be an idiot plot, but it’s very realistic.” I’d argue that incompetence tends to come up less for main characters and main antagonists in fiction than in reality because it tends to make conflicts less satisfying. However, if you can make the conflict interesting anyway, it might not be an issue. If not, I’d recommend being unrealistic if that’s what it takes to avoid an idiot plot.

    “‘In a work where billions of people are completely ****ed over by a loss of most technology, I feel like whether a robot can live honestly/openly is probably not going to feel very high-stakes?’ Um… I’m confused. I don’t understand what you mean here. If people have lost access to the technology they once relied on, wouldn’t they be all too happy to enslave an AI to replace it? Or am I missing something here?” If something’s being missed here, it’s probably by me — I’m not sure I’m following the primary conflict. As far as I understand, (not exceptionally competent) remnants of a company are going to enslave an AI to fix the Internet? Hopefully it’ll feel more intuitive in-story, where there’ll probably be more context. Depending on what the AI had been set up to do (e.g., if it was created as the contingency plan for the Internet going down, or was otherwise related to something critical to fixing the affected technological systems), it’d feel completely intuitive that humans might want to enslave it to fix the Internet.



    “maybe, once experiencing things like taste, smell and touch, it would become overly sensual and crave such feelings? Is the idea of a nonhuman character who wants to feel human sensuality too worn?” I wouldn’t recommend spending a ton of time on it, but I think it’d be okay as a short-term goal. Maybe the AI initially thinks that sensory experience is the key to being human and quickly gets disappointed on that front.



    “It may even have to make the decision to pull the plug on this machine that dominates society, to force people to live without technology. A morally ambiguous choice like that would be a better test of its humanity.” That sounds pretty badass.

  38. LuckyClockwork on 23 Apr 2016 at 3:55 pm

    Thanks so much for all your help. What I have right now is:

    A major part of this story which perhaps I’m not getting across so well is that it’s meant to contain a warning. That’s why I said the MegaCorp would be “a mostly original but still fairly obvious stand-in for Google.” This is a world that is basically our current technology-dependent society taken to its logical extreme. People are not physically dependent on the Internet, not really, but they have all been hooked up to it since birth through brain-implanted microchips that allow them access to the Internet at any time just by thinking, and so have their parents, and their parents before them. They have simply forgotten how to take care of themselves or figure things out without the help of the Internet.

    This includes the former executives of the MegaCorp, who are not only screwed over by the loss of technology, but by their company crashing. They’re in debt, jobless, and can hardly feed their families. They’re meant to be in a pretty pitiable state, to drive home that their actions are not out of stupidity, but desperation. They are used to a lavish lifestyle and a massive amount of power over the creative output of the society – think being in charge of everything that goes on in Silicon Valley. But now not only are they in abject poverty, but the overwhelming majority of their former subordinates hate them and blame them for the collapse.

    However, there are a very few lower-level programmers who are still loyal to them and believe that they can still rebuild – and that they are the *only* ones who can rebuild. There’s almost a cultish thing going on here. Some of these loyal programmers are currently employed at the machine, trying to keep things maintained and running. However, due to the, you know, *virtual collapse of society*, there are not enough resources to keep the machine running, and so it is beginning to fall apart. There is a sense of a clock ticking down.

    Now remember that back when the MegaCorp was still up and running, they were constantly working on new technological projects – you know, like there will always be a new version of the iPhone, no matter how *perfectly fine* the old one was. And one project that had been finished shortly before the collapse, but never put on the market due to said collapse, was a functioning AI designed to be a slave to humans, with self-awareness added in to make its interactions more “authentic.”

    The machine that currently runs the remnant of society is built out of, basically, junk parts. It’s a kind of hybrid of mechanical and computerized parts. Many of the computerized parts used to build it are re-used computers that once belonged to the MegaCorp. Saved on one of those computers is the program for the AI. It’s an AI rather than an android at this point, because it’s just software without hardware – a mind without a body, to put it in less geeky terms. Pure programming. Just lines of code.

    However, one of the loyal programmers stumbles upon the unused and dormant program while doing maintenance on the computers, and when they open it, they discover that it is a *self-aware* and completely subservient AI designed to serve humans. Its function was not to “fix the Internet,” but to basically be your friendly robot slave.

    This being one of the cultish loyal programmers, they realize immediately how much money the executives can make off of selling an AI like this. Everyone will want one. Sales of this will probably be lucrative enough that the executives can pull themselves out of poverty, and start their technology company again from scratch, which the programmers believe will “fix” society.

    Once the AI is fully awoken, they start testing its capabilities, and one of the first things it does is set about repairing the machine. Despite having no body, it quickly learns to maneuver through cyberspace by “zapping” from computer to computer in the form of a signal – since it is just information, and sending information across distances is exactly what computers are made to do.

    However, travel through the machine will occasionally become dangerous, since many of the computers have degraded enough that they could damage the AI’s programming – making each trip through the machine a potentially deadly obstacle course for the AI.

    Eventually, it will figure out that it would be safer to “zap” into the microchips in the brains of humans. But when it does that, it will effectively *possess* the human whose chip it inhabits. Overwhelmed by the sudden access to human emotions, sensations and experiences, it will become confused and end up travelling through many people’s chips. Since the human’s memories, emotions and traits are regularly uploaded into their chips, every time it “zaps” into a chip, it accumulates some of the experiences of the human who owns said chip.

    Over time, it develops a limited capacity for emotion and empathy. From the chip of a young woman who is madly in love with her boyfriend, it experiences sensuality for the first time. For a sub-arc of the story, sensory experience becomes its goal, until, from the chips of several other people, it experiences grief, pain, illness and addiction in turn.

    From the chip of an old man, it experiences the memory of the man’s parents, who were two of the last people to have a garden. It develops an intense longing to have a connection with nature, stemming from the old man’s (unfulfilled) last wish to walk in the woods one more time before he dies.

    Another quality it develops is a sense of self-worth and of the value of freedom. Its former masters have been searching for it, and when they realize that it is actively running from them, they panic. In their desperation, they cross the Moral Event Horizon and go from desperate but mostly harmless opportunists to desperate and violent villains, willing to go to lengths such as *killing the host and ripping the chip out of their head* to try to re-capture their slave.

    The AI’s goal, meanwhile, is to build itself a body that can pass for human, so that it can escape its pursuers, and to settle down in a place where it can own a garden. Peace and a garden are really all it wants, but when it realizes just how doomed human society is if it continues to rely on the machine, it resolves to destroy the machine once and for all so that humanity will be forced to relearn how to live without being totally dependent on technology. In the process, it will have to defeat its former masters so that it can destroy the machine.

    It will be left ambiguous whether humanity survives.

    What do you think? Have I resolved the “idiot plot?” Does it feel fresh? Any more feedback?

  39. Cat-vacuumer Supreme on 27 Sep 2016 at 5:56 am

    It sounds like a good plot, Clockwork. Maybe the executives are also trying to keep it secret? Maybe word leaks out and there is paranoia and AI hatred all around?
