[Image: Normal vision]
Since pictures substitute for words in many cases, I thought I would try to illustrate how I'm seeing at the moment. I'll build up separate images showing each effect of the brain damage, so you can get an idea of how things look to me. I'll stick to the images that my eyes are sending back to my brain; the brain itself perceives the world slightly better than that, but still works poorly with text.
First, here's the image I'm going to work from. Bear in mind that most of you would see this in 3D, but I don't. It's the cabinet in my living room that's full of games and game books. Notice that it's not hard to read Hoity Toity or Goa, and many of you will recognise Settlers of Catan.
[Image: Myopic version]
I have been myopic (short-sighted) for over 30 years, so without glasses, the view would be blurred. I am very myopic, so things are very fuzzy: not so bad that my glasses look like the bottoms of milk bottles, but headed in that direction, for sure.
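(For anyone wondering how the myopic image was faked up: a strong, uniform blur over the whole photograph gets you most of the way there. Here's a rough sketch in Python using the Pillow library; the filename and blur radius are just placeholders, not a record of exactly what I did.)

```python
# Approximate the myopic version: blur the whole image heavily and uniformly.
# "cabinet.jpg" and the radius are illustrative stand-ins.
from PIL import Image, ImageFilter

original = Image.open("cabinet.jpg").convert("RGB")
myopic = original.filter(ImageFilter.GaussianBlur(radius=12))
myopic.save("cabinet_myopic.jpg")
```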
Of course I wear glasses, and used to wear contact lenses, so the world is not so blurry until bedtime; but since the strokes, I have had double vision (diplopia), which confuses things. I have diagonal diplopia, which is oddly more confusing than a purely horizontal or vertical displacement. Because my diplopia is rooted in the coordination and function of my oculomotor muscles (the muscles that move my eyeball) rather than in the visual cortex, the displacement is not fixed; it swims around.
[Image: Diplopia version]
I also have 1.5 syndrome, which for our purposes here means that the displacement differs depending on where an object is in my field of vision. The syndrome has affected my left lateral rectus primarily, but some of the other muscles too, so the doubled image is both translated diagonally and slightly rotated. The image to the right is a reasonable approximation to what I would see wearing glasses, almost. Read on for more wrinkles! It's not quite accurate because my image manipulation skills are not mad enough to convey the contention of the two images. You can clearly see the Goa there, but the picture doesn't give a sense of the other Goa wanting to be read first and foremost.
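(If you want to fake up something similar yourself, the recipe is roughly: take a copy of the image, shove it diagonally, rotate it a degree or so, and blend it half-and-half with the original. A rough sketch in Python with Pillow follows; the offsets and angle are illustrative guesses, not measurements of my eyes.)

```python
# Approximate diagonal diplopia: blend in a shifted, slightly rotated copy.
# The filename, the (x, y) offset, and the angle are illustrative guesses.
from PIL import Image

original = Image.open("cabinet.jpg").convert("RGB")

# The view from the misaligned eye: nudged down and to the right, rotated a touch.
ghost = Image.new("RGB", original.size)
ghost.paste(original.rotate(-1.5), (18, 12))

# Weight the two views equally; the real "contention" between them is harder to convey.
diplopia = Image.blend(original, ghost, alpha=0.5)
diplopia.save("cabinet_diplopia.jpg")
```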
To manage the diplopia by reducing my visual signal to a single useful image, I have an occluding filter on the left lens of my glasses. Or, in English, I have a disc of translucent plastic that sticks to the inside of my left lens and makes it all blurry. Peripherally, my left eye has the benefit of my glasses without the filter, which should be of some use looking down, but because of the 1.5 syndrome, is not much cop on the left side. Dashing as a patch might make me, keeping my left eye essentially uncovered like this keeps it moving in (broken) concert with my right eye, and can only help with recovery. The effect of the filter is that I see something like the image below.
[Image: Occluded version]
The irony of making one side of my vision so useless that my brain ignores its input is not lost on me. It's not always effective, either; sometimes, especially when I am tired, my brain seems more interested in the garbage coming from my left eye than the more useful stuff from my right.
The final, and frankly most debilitating, visual deficit I have is oscillopsia: my right eyeball moves up and down a little, rapidly and regularly. As the size of the movement is usually at least a line of regular text at my comfortable reading distance, this makes reading difficult. It varies from better to worse, but it's never better than shown below in terms of distance moved. The jitter is also quite a bit faster in my eye than in the animation.
Of course, it's not the eyes that do the seeing, it's the brain. What I've tried to illustrate is what my eyes are sending on to the brain. What I actually perceive has changed over the last seventeen months, so I'm better at seeing a steady field of view, especially for rooms that are familiar. It's worth noting that I'm discarding the information from one eye completely, so I don't have binocular vision. I can't tell how far away things are, how fast they are moving, or how flat they are. Sidewalks are exciting! One of the notable effects is that the brain evidently retains a working model of the world, and uses that when there's no depth or distance information available: I can reach for a glass that I put down recently, but if you move it, I will miss.
[Image: Oscillopsia version]
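(The animation itself is just the same photograph shifted up and down by a handful of pixels, with the frames looped. Something along these lines in Python with Pillow would do it; the amplitude and timing here are placeholders, and the real movement is faster.)

```python
# Approximate oscillopsia: the whole scene bobbing up and down, rapidly and regularly.
# The filename, amplitude, and frame timing are illustrative placeholders.
from PIL import Image

original = Image.open("cabinet.jpg").convert("RGB")
amplitude = 20  # vertical displacement in pixels, roughly a line of text

frames = []
for dy in (0, amplitude // 2, amplitude, amplitude // 2):
    frame = Image.new("RGB", original.size)
    frame.paste(original, (0, dy))
    frames.append(frame)

# Loop forever; duration is milliseconds per frame.
frames[0].save("cabinet_oscillopsia.gif", save_all=True,
               append_images=frames[1:], duration=40, loop=0)
```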
I seem to have learnt that the world is not constantly shaking, so the oscillopsia is usually (but not always) ignored in familiar spaces, but it is in full effect when I try to read. One trick I have used to read signage or subway advertising and the like is not to try to read a line of text from, say, a poster, but to allow the image of the text to be assembled coherently in my brain, and to read it from there. This trick is fiddly to execute!
One other thing I have learned from my ocular odyssey is that I infer a lot of the actual text in many novels; the word choices are so often obvious that I don't actually read them, but assume the word given the context. This is less easy with non-fiction, and I find any reading tiring regardless.
EDIT: Over on Google+ a friend was asking how I could even prepare these images given the way my eyesight is at the moment. I've had longer for my brain to get used to the way my eyes behave, I know what my eyes do, and I'm pretty familiar with my computer. The image manipulation is fairly simple. It still took more than a day, mostly because I was tired. Looking at unfamiliar things is most like the last image.
Later, I likened it to looking at things from a very rattly moving vehicle. Distant stuff is easier and more solid, because the brain is used to the eye moving rapidly over near things and assembling a distant image. That image is not always correct, because the brain interpolates things (one of the reasons that a lot of eye-witness testimony is bollocks), and sometimes it fills in the gaps with things that should or might be there, but aren't.
Nearer stuff is impossible to perceive at a glance because I am having to do pretty much the same thing for near vision as the brain does for distance perception, and that's not instantaneous. It takes me enough time to check cross streets for oncoming traffic that I might as well just wait for the lights. I almost never jaywalk alone. Similarly, as I'm typing, I know what words should appear, and in what font, so my brain has an easier time of adding new words to a page of typing. Reading a previous paragraph, though, sets me back again.
END OF EDIT
As usual, questions are welcome!