Posts

Erica’s interview with Center for Data Innovation on how she uses AI in the Sensory Science artworks

(Center for Data Innovation) 5 Q’s for Erica Tandori, an Artist, Researcher, and Academic at Monash University

The Center for Data Innovation spoke with Erica Tandori, an artist in residence at Monash University in Australia, who has low vision and is using AI to create multi-sensory art experiences that showcase the wonders of biological life. Tandori discussed how data-driven technologies are helping her create art exhibitions that explore science and biomedicine, enabling greater inclusion, accessibility, and education for low vision, blind, and diverse audiences.

Hodan Omaar: What challenges do people with low vision or blindness face in accessing science, and how are you using artificial intelligence to better communicate scientific ideas?

Erica Tandori: People with low vision or blindness face the challenges of comprehending a predominantly vision-driven world. In science and biomedicine, reliance on data derived from particle accelerators, X-ray crystallography, and microscopes is critically dependent on the ability to see. But this dependence on vision may be unnecessarily limiting. For instance, the sonification of data, which uses non-speech audio to convey information or perceptualize data, may be an opportunity for those with low vision or blindness to access meaningful information through hearing, and has also been useful for those without impairments.
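To make the idea concrete, here is a minimal sonification sketch in Python, assuming an arbitrary data series and an invented pitch range (neither is drawn from Tandori’s projects): each value is mapped linearly to a sine-tone frequency, and the tones are concatenated into a WAV file.

```python
# Minimal data-sonification sketch: map a numeric series to pitches and
# render them as sine tones in a WAV file. The series and the frequency
# range are invented for illustration.
import wave
import numpy as np

SAMPLE_RATE = 44100   # audio samples per second
NOTE_SECONDS = 0.25   # duration of the tone for each data point

def sonify(values, path="sonified.wav", f_low=220.0, f_high=880.0):
    """Map each value linearly onto [f_low, f_high] Hz and concatenate tones."""
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    span = (hi - lo) or 1.0
    t = np.linspace(0, NOTE_SECONDS, int(SAMPLE_RATE * NOTE_SECONDS), endpoint=False)
    tones = [0.5 * np.sin(2 * np.pi * (f_low + (v - lo) / span * (f_high - f_low)) * t)
             for v in values]
    pcm = (np.concatenate(tones) * 32767).astype(np.int16)  # 16-bit PCM
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(pcm.tobytes())

sonify([3, 1, 4, 1, 5, 9, 2, 6])  # rising values sound as rising pitch
```

A listener can follow the shape of the series by ear alone, which is the core of what sonification offers.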

A multisensory approach to scientific information through art presents new opportunities and possibilities for data representation, even in fields such as astronomy. A recent example of incorporating data in a multisensory way I’ve worked on is the HIV Capsid Data Projection Project, a work created with interaction designer and video artist Stu Favilla. This 5-foot interactive sculpture (shown on the left below) is made up of individual hexagonal and pentagonal tiles smothered in tiny foam balls, simulating the structure and surface of the HIV capsid protein. We projected computer-generated molecular structures onto the sculpture as a way to present and express information about the virus itself.

To reflect the fact that the HIV virus evolves rapidly, we mutated the HIV RNA sequence (the molecule that carries the genetic code) using a Markov chain model, a stochastic model that describes a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. We then color-coded the mutating RNA for projection. I was inspired by John Horton Conway’s classic Game of Life, a simulation in which cells form various patterns over the course of the game based on their initial conditions, and used a two-dimensional array of artificial mutating RNA to seed an algorithmic life pattern into the piece. This was intended to represent a flow of synonymous gene mutations and the viral transmission from one host to the next. Finally, for an online exhibition, we set these rhythms of light and dynamic mutations to a pulsating dance beat, as though this giant viral capsid were taking to a dance floor. We took a little poetic licence with the music, given we were holed up under strict COVID-19 lockdowns. For future exhibitions we intend to align data sonification of the viral mutations with the visual data projections on the surface of the sculpture.

Images by Erica Tandori and Stu Favilla. On the left is the 5-foot interactive sculpture of an HIV capsid, and on the right is an image of SARS-CoV-2 RNA producing color-coded renderings on top of fractal video shaders, incorporating data into Sleeping Pangolin and other sculptures.
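As a rough illustration of the mutation scheme described above, the Python sketch below applies one Markov step per frame to a placeholder RNA string and maps each base to an RGB color; the transition probabilities, sequence, and palette are invented for the example, not the values used in the actual projection.

```python
# Sketch of Markov-chain sequence mutation with a color code per base.
# Transition probabilities, sequence, and colors are illustrative only.
import random

BASES = "ACGU"  # RNA alphabet
# P[b][n] is the probability that base b becomes base n in one step;
# the heavy diagonal keeps mutations rare, so patterns drift slowly.
P = {
    "A": {"A": 0.97, "C": 0.01, "G": 0.01, "U": 0.01},
    "C": {"A": 0.01, "C": 0.97, "G": 0.01, "U": 0.01},
    "G": {"A": 0.01, "C": 0.01, "G": 0.97, "U": 0.01},
    "U": {"A": 0.01, "C": 0.01, "G": 0.01, "U": 0.97},
}
COLORS = {"A": (255, 0, 0), "C": (0, 255, 0), "G": (0, 0, 255), "U": (255, 255, 0)}

def mutate(seq):
    """One Markov step: each base's next state depends only on its current state."""
    return "".join(random.choices(BASES, weights=[P[b][n] for n in BASES])[0]
                   for b in seq)

seq = "AUGGCUACGUUAGC"  # placeholder string, not the real HIV sequence
for frame in range(5):
    rgb = [COLORS[b] for b in seq]  # one color per base, ready to project
    print(frame, seq)
    seq = mutate(seq)
```

In the installation, a two-dimensional array of such sequences could then seed a Game-of-Life-style pattern, with each frame’s colors projected onto the tiled capsid surface.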

The Sleeping Pangolin sculpture is another work into which we are incorporating AI and data, with screens that show mutating viruses created by Stu Favilla. Here, the SARS-CoV-2 RNA produces color-coded renderings atop fractal video shaders (the visualization is shown above on the right). The work is high-resolution, creating multiple displays from an initial 6K video rendering. This way, a single video rendering is distributed to a series of sculptures including a pangolin (shown on the right below), bat, and civet cat. We are also exploring the use of 3D sound sonification in this work.
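One way to realize the single-rendering, many-displays idea is to carve fixed regions out of each high-resolution frame and route one region to each sculpture’s screen. The sketch below assumes an illustrative 6K frame size and an invented region layout; it is not the project’s actual pipeline.

```python
# Sketch: slice one high-resolution frame into per-sculpture regions.
# The frame size and the region coordinates are illustrative guesses.
import numpy as np

frame = np.zeros((3160, 6144, 3), dtype=np.uint8)  # stand-in for one 6K video frame

# (top, left, height, width) of the crop routed to each sculpture's screen
REGIONS = {
    "pangolin": (0, 0, 1080, 1920),
    "bat": (0, 1920, 1080, 1920),
    "civet": (0, 3840, 1080, 1920),
}

def crops(frame):
    """Yield (name, sub-image) pairs, one per sculpture display."""
    for name, (top, left, h, w) in REGIONS.items():
        yield name, frame[top:top + h, left:left + w]

for name, sub in crops(frame):
    print(name, sub.shape)  # each sub-array would be pushed to that display
```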

If we can create works that explore data projection in new ways free from the screen, we may also be able to heighten accessibility to data, creating new understandings and new meaning.

I have marvelled at the way making science accessible to those with low vision and blindness has also made science more accessible to everyone, no matter their age or scientific literacy. Imagining a world in which there are no obstacles to information and knowledge access could significantly improve people’s ability to pursue careers that they might have thought impossible for them. If we can represent information and data in new ways, we may be able to reconsider notions of disability altogether. Inclusion and diversity in our workplaces and in the scientific arena create both equal opportunity and innovation. This has been a driving idea behind our Monash Sensory Science initiative, an exhibition program specifically designed for those with low vision, blindness, and other disabilities. Professor Jamie Rossjohn of the Rossjohn Laboratory, where I work as artist in residence, was well aware of the limitations faced by those with low vision and blindness, and of the lack of university science research outreach programs catering to this demographic.

Our exhibitions have been travelling across Australia since 2018, and we have engaged audiences of all ages and scientific backgrounds in the hopes of creating community awareness about biomedical research and fostering the idea in young students that they too can pursue careers in science, no matter what their perceived limitations might be. We have created these exhibitions to enable people to access information, ideas, and research around biomedicine in multisensory ways that include visual, tactile, and audio approaches. In our technology-driven world, so much is screen-based. This presents both a problem and an opportunity for those with low vision or blindness. Vision-based data is devoid of any other sensory experience apart from sound, as screens are not very interesting to touch, hold, or interact with. On the other hand, screens with zoom functionality enable those with low vision to see data at greater magnification, as well as access text-to-speech functions. If we can release data from this screen-bound environment, utilizing AI and multimodal delivery, we may be able to further eliminate obstacles to data access in myriad ways, with enormous implications for diverse needs.

Omaar: Many applications intended to assist people who experience blindness or low vision are not trained on data created by people with these disabilities. For instance, some algorithms that assist people with disabilities (PWD) to identify objects in photographs are trained only on photographs taken by people without disabilities. Why is it important for these types of AI systems to be trained on pictures taken by PWD?

Tandori: This is an excellent question and one that touches on so many issues around low vision, blindness, and the lived experience of disability. This is really a question about how much we allow those with disabilities to speak about their lived experience, their needs, and their self-determination.

My PhD focused on the lived experience of how blindness looks. It was, as far as possible, an eyewitness account of vision loss and how the world appears from my perspective. How many of those in the medical profession know what vision loss or blindness actually looks like? Far from the blackness we might imagine, macular degeneration and other conditions of the retina can cause an absence of vision, flashing lights, metamorphopsia (a perceptual distortion that causes linear objects, such as lines on a grid, to look curvy or rounded), or a myriad of entoptic visual artifacts caused by the interaction between disease, the environment, and the brain. In medical texts, journals, and the wider media, my type of vision loss is portrayed as a static black spot at the centre of a perfectly defined visual field. But my vision loss is dynamic and ever-changing. At times, the visual field resembles a cross between a softly focused, light-filled Monet painting and the writhing, explosive brushstrokes of a van Gogh. The centre of the field in the scotoma (a partial loss of vision or blind spot in an otherwise normal visual field) is an experience of absence, where cortical completion fills the area of nothingness with surrounding color (yes, color!) and pattern.

In my piece Invisible Mona Lisa (shown on the left below), you can see the processes of cortical completion taking place—the area of the face is concealed by the scotoma, and the area where her head should be is filled with background color. My brain is working hard to make sense of the blind spot, actively completing the picture for me. This is largely how I see others, their faces obscured as though partially camouflaged.

On the left is Erica's "Invisible Mona Lisa," an artwork exploring cortical completion and absence at the scotoma, and on the right is an interactive sculpture of a sleeping pangolin incorporating data screens.

Medical technologies have been unable to determine how entoptic symptoms of macular disease appear. It is by listening to the voices of those with lived experience of disability that we can come to better understand their perspective. If those voices are included in all aspects of society, across the medical, technological, and cultural spheres, we may find more effective solutions and innovative approaches that create suitable environments to minimize the impact of disability. Weaving the voices and lived experience of those with disability into the loop of AI systems would be hugely beneficial.

As an aside, some of the artworks I have created have completely confounded object recognition software, with some hilarious outcomes, which may have implications for AI, accessibility, and art for people who are blind. AI completely misrecognized my Sleeping Pangolin sculpture (shown above on the right), for example.
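For readers curious what such a test can look like, here is a hedged sketch using a stock pretrained ImageNet classifier from torchvision (a generic model, not the recognition software Tandori encountered); since sculptures like a foam pangolin match no ImageNet category, the top predictions are often amusingly wrong. The image filename is hypothetical.

```python
# Sketch: run a photo of a sculpture through a stock ImageNet classifier.
# A generic pretrained model, not the software mentioned above.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the preprocessing matched to these weights

img = Image.open("sleeping_pangolin.jpg").convert("RGB")  # hypothetical photo
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
probs = logits.softmax(dim=1)[0]
top5 = probs.topk(5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][int(idx)]}: {p:.2%}")  # top-5 guesses
```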

Omaar: Throughout your artistic and PhD career, you have explored ideas surrounding the dualism of vision: what it means to see as opposed to what it means to have vision. Is it fair to say that the work you do is augmenting people’s vision, i.e., using digital tools to help people imagine things in their mind’s eye?

Tandori: I have used digital tools to help people imagine how things look from my perspective and tried to give an eyewitness account of my own macular dystrophy as accurately as I possibly can. However, there is a conundrum in this that is completely unresolvable.

If I look at the artwork I create, I am always looking at it through the lens of a diseased retina. It is always going to look like an accurate portrayal of vision loss to me because I am seeing it through my own vision loss. The only way to really know if I am accurate is if I take out my eyes and put in a healthy pair, which we know is completely impossible. I have to work around this conundrum, utilizing my peripheral vision and zoom technologies on my computer screen to see as much as possible of what I have created.

There are many dualities in this task: exploring the loss of vision through the visual language of art, the inescapability of creating works about vision loss through the prism of vision loss, observing my own observing and my diminished ability to observe, and creating images that compare “normal” vision with “diseased” vision. Moreover, we have the dichotomy of seeing and vision, of the mind’s eye and the retina.

During my PhD (see some of the artworks here), I tried to create, as accurately as possible, a vision of my vision loss. Using a digital camera, I took images that might convey how something looks to those with “normal” vision. Sometimes the camera became the “perfect eye,” able to catch the scenes that I was unable to see by myself. I would then take these digital images and augment them using Photoshop and After Effects to simulate my eye disease. In this way, I was trying to explore the gap between seeing and non-seeing. To my surprise, this could not be done with one single method. Digital tools could not express all the complexity of deteriorating vision. I needed to employ traditional methods of art-making in combination with digital technologies to fully articulate the range of symptoms. Moving back and forth between traditional and digital methods created a dialogue between analogue and digital ways of understanding the phenomena of vision loss. Oil paint, canvas, paper, and pencil create texture and visceral qualities which are absent in the digital image. Oil painting and drawing seem to articulate more about an experience, whereas digital images seem to be more about data. This was a fascinating interaction for me as an oil painter who was now discovering digital media techniques.
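As a crude digital analogue of one such augmentation, the sketch below (an illustrative approximation in Python with Pillow, not Tandori’s actual Photoshop/After Effects workflow) blurs the centre of an image and blends it back through a soft elliptical mask, loosely mimicking a scotoma being filled in by surrounding colour and pattern. The filenames are hypothetical.

```python
# Crude sketch of a central scotoma with "cortical completion": the centre
# of the image is replaced by a heavily blurred copy of its surroundings,
# so the blind spot fills with nearby colour and pattern. An approximation
# for illustration only, not the artist's actual workflow.
from PIL import Image, ImageDraw, ImageFilter

img = Image.open("scene.jpg").convert("RGB")  # hypothetical source photo
w, h = img.size

# A heavily blurred copy stands in for the brain's fill-in of the blind spot.
filled = img.filter(ImageFilter.GaussianBlur(radius=60))

# Soft-edged elliptical mask over the central visual field.
mask = Image.new("L", (w, h), 0)
draw = ImageDraw.Draw(mask)
draw.ellipse((int(w * 0.3), int(h * 0.3), int(w * 0.7), int(h * 0.7)), fill=255)
mask = mask.filter(ImageFilter.GaussianBlur(radius=40))

# Composite takes `filled` where the mask is white, the original elsewhere.
Image.composite(filled, img, mask).save("scene_scotoma.jpg")
```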

Omaar: Some people are better at learning visually, others may prefer reading aloud, yet others may need hands-on experience. How do different tools address different people’s educational needs? Do different modalities work better in different contexts?

Tandori: People learn in different ways, and sometimes utilizing more than one way of learning can be highly effective. Although my vision has deteriorated, I am sure I still learn best with the remaining vision that I have, and I don’t know if this is because I grew up learning and practicing art or because I am simply a visual learner. I know that if I see a word on a printed page it seems to soak into my memory more effectively than if I see it on a computer screen or hear it spoken out loud.

The notion of different modalities and memory retention is also interesting. Engaging a variety of modalities to convey information could be highly effective. So much of our brain is dedicated to visual processing, but if that vision is limited, what happens to all that grey matter? Can it be accessed differently? Recently I had an interview with Neil Sahota of IBM on the AI for Good 2020 Global Summit podcast, where we discussed “artistic intelligence” and how it could combine with artificial intelligence to produce innovative ways of accessing data and solving problems. Why can we not all be fabulous at art? Is there something in the creative approach that can be useful as an educational tool and in technology to help us think outside the box (or computer screen)?

Hopefully, utilizing multisensory modes of data delivery, whether for education, research, or engagement with audiences at our Sensory Science exhibitions, can promote deeper and more dynamic levels of understanding.

Omaar: Looking to the future, where do you think the greatest need for additional research is, if we are to enhance access and use of new technologies for PWD?

Tandori: Including those with the lived experience of disability would be an amazing place to start to enhance access and use of new technologies for these very same people. To include those with disabilities in discourses around science, technology, art, medicine, and everything else would be empowering and would change the world for the better. We can only understand the needs of those who have different needs if we ask them and engage as equals in developing more accessible, adaptable, user-friendly technologies and environments.

It is possible that if we focus research in this area, we may help to eliminate many of the barriers that PWD face, and if we can do that, then we eliminate the word “disability” altogether. Can you imagine how this might affect the prospects of those who currently face a life of unemployment as a result of their disabilities? How it might empower them to be independent or pursue educational goals and careers they might never have thought attainable? Moreover, we need to make sure that these technologies are accessible to everyone across the globe, irrespective of their socio-economic position, and that they are cost-effective.

Original article 

Unlocking Your Inner Eye. Artistic Intelligence with Erica Tandori, a Legally Blind Artist

Artist in residence Dr Erica Tandori is expanding the frontiers of artificial intelligence (AI) and art. Her work in the Rossjohn lab at Monash University focuses on communicating science through art for the visually impaired. She is now expanding this work, utilising robotics in her artistic creations to create a multi-sensory experience. AI and robotics have the potential to transform lives and promote social good. Harnessing these technologies to create art exhibitions exploring science and biomedicine is enabling greater inclusion, accessibility, and education for low vision, blind, and diverse audiences. Erica’s work and her personal story provide an impressive example of AI for social good, promoting diversity and inclusion in science and technology.

Art is not in the retina. It’s in the imagination. Hear the story of Erica Tandori, a visually impaired artist, who is using AI to create multi-sensory art experiences showcasing the wonders of biological life.

TOPICS DISCUSSED IN THIS EPISODE:
– Art is not in the retina, it’s in the imagination
– Natural intelligence
– Tapping into the soul to power AI and art
– Art for good
– Fostering wonderment to think differently

Panelists include Neil Sahota, World Wide Business Development Leader at IBM Watson, and Michael Ashley, Screenwriting Professor at Chapman University.

Original article