
Spidey Sense

What if we had enhanced reflexes and an extra "sense" that didn't rely on our other senses?

Published on Nov 04, 2019

While usually overshadowed by flashier superpowers like superhuman strength and speed, many superheroes also amplify their abilities with enhanced reflexes. Those reflexes tend to be taken for granted as a mere extension of the headline powers, since we focus on the other things the heroes can do. There is one superhero, however, who puts superhuman reflexes front and center (along with certain other iconic powers…) and treats the enhanced strength and speed as a side note. Who might that be? You guessed it, your friendly neighborhood (or masked menace, if you prefer) Spider-Man!

While wall-crawling and web-slinging would be cool, I’ve seen some YouTube videos by HackSmith covering ideas for how they could be implemented. Note that those designs are still constrained to climbing metal walls, and there’s still the need to develop actual webbing and functional web shooters. Instead, I’d like to tackle how I might implement the most abstract of Spider-Man’s powers: his Spidey Sense.

Essentially, my idea combines two techniques: computer vision and bio-electric stimulation. First, here are some examples of each that inspired the idea, to help you see my vision before I explain it.

Computer Vision - Mark Rober’s Automatic Bullseye Moving Dartboard

Neural Stimulation - the “Mind Control” idea from Netflix’s White Rabbit Project S1:E1, Super Hero Tech

The idea is situational, but it gets the basic concept across, and you could then scale it to be even better. The specific application would be catching a ball while blindfolded. A camera tracks where the ball is; then, rather than sending commands to a robot, the system sends electric pulses to specific areas of the body, causing them to move to the proper location to catch the ball.
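In code, that loop might look like the following minimal sketch. The three stages (`track`, `predict`, `stimulate`) are hypothetical stand-ins for the camera, trajectory, and nerve-stimulation subsystems, each of which the rest of this post discusses in more detail:

```python
# Sketch of the sense-and-stimulate loop. Every function here is a
# placeholder for a real subsystem; only the overall flow is the point.

def track(frame):
    # Camera stage: locate the ball in one frame.
    return frame["ball_pos"]          # hypothetical detector output

def predict(positions):
    # Trajectory stage: estimate where the ball will end up.
    return positions[-1]              # placeholder "prediction"

def stimulate(location):
    # Nerve stage: fire the muscles that move the arms to `location`.
    return f"pulse toward {location}" # placeholder actuator

def spidey_sense(frames):
    positions = [track(f) for f in frames]
    catch_point = predict(positions)
    return stimulate(catch_point)
```

Each placeholder would be replaced by the real component (tracking camera, trajectory math, neural transmitter), but the data flow between them stays the same.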

Flowchart of the system’s process

Let’s start with the camera. While many systems already have great tracking cameras, we would probably want to train our own on this specific application first: a ball coming toward the camera. We would throw a ball at or near the camera over and over until the camera, paired with the software, can reliably track the ball from the moment it is thrown and calculate its trajectory. The goal is to know, before the ball even arrives, the location in the plane around the camera that the ball will travel to, leaving enough time to send the correct responses to catch it. While this is a very specific example, you could train the camera, along with an added array of sensors, on different “threats” that might activate one’s “Spidey Sense,” then begin work on the appropriate responses. The camera could be mounted somewhere on a mask or the head, perhaps over the eyes, but also on the back of the head for some more interesting tests and applications.
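As a rough illustration of the trajectory step, here is a sketch that assumes the camera can already report the ball’s 3D position each frame (a big assumption) and uses simple projectile kinematics to estimate where the ball crosses the catch plane at z = 0. A real system would least-squares fit every frame rather than using just the first and last samples:

```python
G = 9.81  # gravitational acceleration, m/s^2

def predict_catch_point(samples):
    """Given (t, x, y, z) samples of a ball flying toward the z = 0
    catch plane, estimate when and where it will cross that plane.
    x/z are treated as constant-velocity; y falls under gravity."""
    t0, x0, y0, z0 = samples[0]
    t1, x1, y1, z1 = samples[-1]
    dt = t1 - t0
    vx = (x1 - x0) / dt
    vz = (z1 - z0) / dt
    # Vertical velocity at t0: correct the finite difference for the
    # gravity the ball lost over the interval dt.
    vy = (y1 - y0) / dt + 0.5 * G * dt
    t_hit = t0 - z0 / vz              # time when z(t) reaches 0
    tau = t_hit - t0
    x_hit = x0 + vx * tau
    y_hit = y0 + vy * tau - 0.5 * G * tau**2
    return t_hit, x_hit, y_hit
```

For example, a ball released 6 m away at 12 m/s of closing speed crosses the catch plane in half a second, which is the entire time budget for tracking, prediction, and stimulation combined.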

You would then need to train the neural transmitters quite extensively to build an accurate map of which muscles need to react, and how intensely, for any position the ball might travel to. This could be accomplished by attaching a large array of sensors all over someone’s arms and back, then throwing a ball to them over and over. The camera’s prediction accuracy can be tested at the same time, while each “catch location” is recorded and mapped to the group of muscles, and their respective intensity readings, for that location.

Once the sensors have collected enough data, you can try to reverse the input and output: electrically stimulate those muscles based on different catch locations. By plugging in a catch location, the neural transmitter should activate the correct muscles to get the user’s arms and hands into position for a successful catch. Of course, this last step could prove tricky, because the body may react differently, at least subconsciously, to seeing the ball and directing itself to catch it than to being blindfolded while an external source tells it to move to a certain location in anticipation. There is something different about actually seeing the ball versus merely having an automatic response to it. I’m also not sure how precise the neural transmitter can get, such as controlling each individual finger for the best catching probability, but adjustments and fine-tuning can be done.

It does raise the question, however: if the neural transmitter were good enough to make such a motion automatic, how would the body adapt to this added sort of sixth sense? Would it accept it as part of the body? Or would you never get used to it, with the body resisting the impulses provided by an external source?
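The “reversed” mapping could start out as something as simple as a nearest-neighbor lookup over the recorded sessions. The calibration table below is entirely made up for illustration; in a real build it would come from the sensor-array training described above, and the muscle names and intensities are hypothetical:

```python
import math

# Hypothetical training log: catch location (x, y) in metres ->
# recorded muscle activation pattern (muscle name -> intensity 0..1).
CALIBRATION = {
    (-0.3, 1.2): {"left_deltoid": 0.8, "left_forearm": 0.6},
    ( 0.0, 1.6): {"left_deltoid": 0.4, "right_deltoid": 0.4},
    ( 0.4, 1.1): {"right_deltoid": 0.9, "right_forearm": 0.7},
}

def stimulation_pattern(x, y):
    """Return the activation pattern recorded nearest the predicted
    catch point -- the reversed input/output mapping described above."""
    nearest = min(CALIBRATION, key=lambda p: math.dist(p, (x, y)))
    return CALIBRATION[nearest]
```

With enough recorded catches, the lookup could be replaced by an interpolating model, but the nearest-neighbor version makes the reversal idea concrete: location in, stimulation pattern out.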

In summary, the tracking software and systems already exist; they would just need to be fast enough to calculate the trajectory and decide which muscles to activate. The hardest part would be improving the neural transmitter to the point that exactly the right muscles are activated and moved into the proper “catching” position. Safety hazards are also something to be aware of, especially in terms of nerve sensitivity and perhaps deeper neuro-psychological reactions. A final sketch of the system is below.

Mock-up sketch of possible design prototype highlighting positioning and connectivity
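As a sanity check on “fast enough,” here is a back-of-envelope latency budget. Every number in it is an illustrative assumption, not a measurement:

```python
# Illustrative timing budget for one throw. All figures are assumed.
FLIGHT_TIME = 6.0 / 15.0   # s: ball thrown at 15 m/s from 6 m away
FRAMES_NEEDED = 5          # samples assumed for a stable trajectory fit
CAMERA_FPS = 120           # high-speed camera assumed
NERVE_LATENCY = 0.12       # s: stimulation plus muscle response, assumed

track_time = FRAMES_NEEDED / CAMERA_FPS
margin = FLIGHT_TIME - track_time - NERVE_LATENCY
print(f"{margin * 1000:.0f} ms left to compute the catch point")
```

Under these made-up numbers a couple hundred milliseconds remain for the trajectory and muscle-mapping computation, which suggests the software side is plausible and the nerve latency is the part to worry about.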

Obviously, this is only one specific application of enhanced reflexes: sensing an object without actually seeing it, while still reacting to it as you normally would. This is just a feasible prototype, after all. Imagine the possibilities, though! Now you, too, can say, “My Spidey Senses are tingling!”
