Get a taste for science in National Science Week!
Our $500 000 grant round for 2021 has just been announced, with great projects from around the country preparing to celebrate science. Many of the projects took inspiration from the National Science Week school theme of Food – Different by Design, including Food – Now and into the Future, which is all about making healthy food choices and will be presented by the Wesley Mission in Logan City, just south of Brisbane.
STEAM Ahead – Foodlovers is an exploration of traditional Indigenous food and modern food production techniques at the Western Sydney Parkland. If you’re thirsty for more, four boutique brewers will conjure special brews for ExBEERimental Science in Hobart and share their techniques and tastes with both live and virtual audiences.
And while they may not be delicious, the Donut Shooting Robots in Adelaide will fight it out as 15 teams go head to head in a design-build-program competition.
Minister for Industry, Science and Technology Karen Andrews said the Australian Government was proud to support inspiring, innovative and accessible projects as part of National Science Week.
“Science is everywhere, and National Science Week is for everyone,” Minister Andrews said.
“Even in the midst of last year’s lockdowns, more than one million Australians took part in events across every state and territory. This year, we’re looking to boost those numbers even higher.
“From concerts to VR tours and everything in between, this year’s National Science Week grant recipients have something to offer every Australian.”
The grant recipients are:
My Goodness: Interactive multisensory science books
Read about immune system cells through your sense of touch or learn about food and nutrition through a 3D soundscape. ‘My Goodness’, a Rossjohn Sensory Science multisensory science book project, is an exhibition of 10 interactive ‘books’ designed for low-vision, blind, hearing-impaired, deaf, and non-disabled audiences.
The books explore the relationship between infection, immunity, food, and nutrition. They make science accessible to more people by using large-print text, braille, tactile artworks, haptic and 3D audio, visual tracking, and tactile-sensor interaction technologies.
National Science Week 2021 will run from 14-22 August. Watch this space for further details.
The immune systems of all vertebrates contain specialized cells, called T cells, that play a fundamental role in protecting against fungal, bacterial, parasitic and viral infections. T cells use ‘molecular sensors’ called T cell receptors (TCRs) on their surface to detect invading pathogens so they can be eliminated. For most of the past four decades, it was considered that there were only two T cell lineages, αβ and γδ T cells, characterized by their cell surface expressed αβ and γδ TCRs, respectively.
In a paper published today in Science, an international team of scientists at the University of New Mexico (US), Monash University (Australia), and the US National Institutes of Health has defined a novel T cell lineage, called γµ T cells, found only in marsupials (e.g. kangaroos and opossums) and monotremes (e.g. duckbill platypus).
Evidence for the γμ TCR came with the discovery of genes encoding the TCRμ protein during analysis of the first complete marsupial genome, that of the South American opossum Monodelphis domestica. Oddly, and distinct from conventional αβ and γδ TCRs, TCRμ was predicted to share similarity with antibodies.
Using the Australian Synchrotron, the scientists at Monash University obtained a detailed three-dimensional image of the opossum γµTCR, revealing an architecture unique and distinct from that of αβ or γδ TCRs. Noteworthy was the presence of an additional antibody-like segment, the Vμ domain, with an architecture resembling that of nanobodies, a distinctive class of antibody. This discovery raises the possibility that γμ T cells recognize pathogens using novel mechanisms, distinct from those of conventional T cells.
“The discovery of a nanobody-like structure in the γμ TCR has the potential to expand the immunology ‘toolbox’. Indeed, nanobodies discovered in the camel family (e.g. alpacas) have recently attracted considerable interest for their development as research and diagnostic tools and, more importantly, as immunotherapeutics in humans to combat cancer and viral infections such as COVID-19. Marsupials may offer an alternative source of nanobodies, one that is smaller, easier, and cheaper to maintain than llamas or alpacas,” said Dr Marcin Wegrecki from the Monash University Biomedicine Discovery Institute, co-first author on the paper.
“Our findings further illustrate the value of exploring the world’s biodiversity for novelty beyond the standard animal research models, such as laboratory mice. Modern genomic tools applied to many species have opened the door to the myriad immunological solutions to fighting pathogens that evolution has produced,” said Prof Robert Miller from the University of New Mexico, co-lead author on the paper.
“Many in-roads have been made in understanding the immune systems of humans and mice, leading to the development of novel immunotherapeutic approaches enabling humans to combat highly pathogenic viruses. However, much less is understood about how immunity operates in other species that, in some cases, have been decimated by wildlife diseases. Ultimately, our work may guide the development of veterinary approaches (e.g. novel vaccines) that will contribute to wildlife conservation,” said Dr Jérôme Le Nours from the Monash Biomedicine Discovery Institute, co-lead author on the paper.
“This is a prime example of curiosity-driven science leading to unexpected and transformative findings,” Le Nours stated.
The research findings were the culmination of a 12-year, multidisciplinary collaborative project supported by the ARC Centre of Excellence in Advanced Molecular Imaging, with funding from the US National Science Foundation, the US National Institutes of Health, and the Australian Research Council.
Read the full paper in Science, titled ‘The molecular assembly of the marsupial γμ T cell receptor defines a third T cell lineage’.
More Research To Improve Survival Rates For Cancer
The Andrews Labor Government is helping Victoria’s best and brightest researchers discover new breakthroughs in cancer prevention, treatment and care.
Minister for Health Martin Foley today announced the 21 recipients from the Victorian Cancer Agency’s latest grants round, who will share in more than $10 million in research grants to work on ground-breaking discoveries.
Dr Paul Beavis from the Peter MacCallum Cancer Centre is investigating a new way to make CAR T-cell therapy, a breakthrough treatment for blood cancer, also work against solid tumours.
Dr Laura Forrest, also from the Peter MacCallum Cancer Centre, is testing a new screening tool designed to identify the best support for people with genetic risk factors for cancer.
The Victorian Cancer Plan 2020-24 sets an ambitious target of saving 10,000 lives from cancer by 2025.
In Victoria, the five most common cancers are prostate, breast, bowel and lung cancer, and melanoma. Participating in cancer screening and finding cancer early, before any symptoms are noticed, gives the best chance of survival.
More than 90 per cent of bowel cancers can be successfully treated if found early. All eligible Victorians aged 50-74 should screen every two years for bowel cancer by completing a free, at-home screening test sent in the mail.
Due to early detection and better treatment, more Victorian women are surviving breast cancer, with the five-year survival rate now at 91 per cent compared to 73 per cent in 1986. Eligible women aged 50-74 are invited to screen for breast cancer every two years.
Cervical cancer is one of the most preventable cancers, thanks to regular cervical screening tests for women aged 25-74 and HPV vaccination.
This funding takes the total investment by the Victorian Cancer Agency to more than $250 million since it was established by the Victorian Government in 2006.
As part of its dedication to cancer research, the Labor Government allocated a further $2.447 million in the 2020/21 State Budget to increase access to clinical trials and teletrials for regional patients.
Mid-Career Research Fellowship (Biomedical Stream)
Dr Julian Vivian – Monash University
Improving Bone Marrow Transplantation Treatment of Leukaemias by Donor/Recipient ‘Mismatching’
Killer-cell receptors are central to immune surveillance, controlling both T cells and Natural Killer cells. Currently, exploiting Killer-cell receptors in the clinic is hampered by our lack of understanding of the extreme diversity of receptor and ligand pairings. I have recently provided a framework to decipher this receptor/ligand code and will apply this to bone marrow transplantation for the treatment of leukaemia. These studies will underpin the development of new strategies for donor/recipient matching and for the prophylaxis and treatment of cytomegalovirus reactivation in transplantation.
All grant recipients
The Center for Data Innovation spoke with Erica Tandori, an artist in residence at Monash University in Australia, who has low vision and is using AI to create multisensory art experiences that showcase the wonders of biological life. Tandori discussed how data-driven technologies are helping her create art exhibitions that explore science and biomedicine, enabling greater inclusion, accessibility, and education for low-vision, blind, and diverse audiences.
Hodan Omaar: What challenges do people with low vision or blindness face in accessing science, and how are you using artificial intelligence to better communicate scientific ideas?
Erica Tandori: People with low vision or blindness face the challenge of comprehending a predominantly vision-driven world. In science and biomedicine, interpreting data derived from particle accelerators, X-ray crystallography, and microscopes depends critically on the ability to see. But this dependence on vision may be unnecessarily limiting. For instance, the sonification of data, which uses non-speech audio to convey information or perceptualize data, may offer those with low vision or blindness access to meaningful information through hearing, and has also proved useful for those without impairments.
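To make the idea of sonification concrete, here is a minimal sketch (my own illustration, not any exhibition's actual software; the function name, pitch range, and note length are all assumed choices) that maps each value in a data series to the pitch of a short sine tone and writes the result to a WAV file using only the Python standard library:

```python
import math
import struct
import wave

def sonify(values, path="sonification.wav", rate=44100, note_len=0.3,
           f_lo=220.0, f_hi=880.0):
    """Render a data series as audio: each value becomes a short sine tone
    whose pitch tracks the value's magnitude (low value = low pitch)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for a flat series
    frames = bytearray()
    for v in values:
        freq = f_lo + (v - lo) / span * (f_hi - f_lo)  # linear pitch mapping
        for i in range(int(rate * note_len)):
            sample = 0.5 * math.sin(2 * math.pi * freq * i / rate)
            frames += struct.pack("<h", int(sample * 32767))  # 16-bit PCM
    with wave.open(path, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # 2 bytes per sample
        w.setframerate(rate)
        w.writeframes(bytes(frames))
    return path

sonify([1.0, 3.5, 2.2, 5.0, 4.1])
```

A listener hears the data rise and fall as melody; richer schemes might map a second variable to loudness or timbre.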
A multisensory approach to scientific information through art presents new opportunities and possibilities for data representation, even in fields such as astronomy. A recent example of incorporating data in a multisensory way I’ve worked on is the HIV Capsid Data Projection Project, a work created with interaction designer and video artist Stu Favilla. This 5-foot interactive sculpture (shown on the left below) is made up of individual hexagonal and pentagonal tiles smothered in tiny foam balls, simulating the structure and surface of the viral HIV protein. We projected computer-generated molecular structures onto the sculpture as a way to present and express information about the virus itself.

To incorporate the fact that the HIV virus evolves rapidly, we mutated the HIV RNA protein set (the compound that carries genetic code) based on a Markov chain model, a stochastic model that describes a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. We then color-coded the HIV RNA protein set for projection. I was inspired by John Horton Conway’s classic Game of Life, a simulation where cells form various patterns over the course of the game based on their initial conditions, and used a two-dimensional array of artificial mutating RNA to seed an algorithmic life pattern into the piece. This was intended to represent a flow of synonymous gene mutations and the viral transmission from one host to the next.

Finally, we set the stage for these rhythms of light and dynamic mutations to a pulsating dance beat, as though this giant viral capsid was taking to a dance floor, for an online exhibition. We took a little poetic licence with the music, given we were holed up in strict lockdowns due to COVID-19. For future exhibitions we intend to align data sonification of the viral mutations with the visual data projections on the surface of the sculpture.
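The kind of pipeline described can be sketched in a few lines. The code below is my own reconstruction, not the project's actual software: the uniform transition matrix, mutation rate, and purine-based seeding rule are all illustrative assumptions. A first-order Markov chain mutates an RNA sequence base by base, and the mutated sequence then seeds a Game of Life grid that is stepped forward:

```python
import random

BASES = "ACGU"
# First-order Markov chain: probability of each replacement base given the
# current base (a uniform matrix here; real work would fit these to data).
TRANSITIONS = {b: {c: 0.25 for c in BASES} for b in BASES}

def mutate(seq, rate=0.05, rng=random):
    """Mutate each base with probability `rate`, drawing the replacement
    from the Markov transition row of the current base."""
    out = []
    for base in seq:
        if rng.random() < rate:
            row = TRANSITIONS[base]
            out.append(rng.choices(list(row), weights=row.values())[0])
        else:
            out.append(base)
    return "".join(out)

def seed_grid(seq, width):
    """Seed a Game of Life grid from a sequence: purines (A, G) live."""
    cells = [1 if b in "AG" else 0 for b in seq]
    return [cells[i:i + width] for i in range(0, len(cells) - width + 1, width)]

def life_step(grid):
    """One Conway Game of Life step on a grid with dead borders."""
    h, w = len(grid), len(grid[0])
    def nb(r, c):  # count live neighbours of cell (r, c)
        return sum(grid[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr or dc) and 0 <= r + dr < h and 0 <= c + dc < w)
    return [[1 if (nb(r, c) == 3 or (grid[r][c] and nb(r, c) == 2)) else 0
             for c in range(w)] for r in range(h)]

rna = "AUGGCACGUACGGUA" * 4
grid = seed_grid(mutate(rna, rate=0.1), width=10)
grid = life_step(grid)  # evolve the projected pattern by one generation
```

Each mutation round reseeds the grid, so the projected pattern drifts as the artificial genome drifts; color-coding each base would then turn the same arrays into the projected imagery.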
The Sleeping Pangolin sculpture is another work into which we are incorporating AI and data, with screens that show mutating viruses created by Stu Favilla. Here, the SARS-CoV-2 RNA produces color-coded renderings atop fractal video shaders (visualization is shown above on the right). The work is high-resolution, creating multiple displays from an initial 6K video rendering. This way, a single video rendering is distributed to a series of sculptures including a pangolin (shown on the right below), bat, and civet cat. We are also exploring the use of 3D sound sonification in this work.
If we can create works that explore data projection in new ways free from the screen, we may also be able to heighten accessibility to data, creating new understandings and new meaning.
I have marvelled at the way making science accessible to those with low vision and blindness has also made science more accessible to everyone, no matter what their age or scientific literacy. Imagining a world in which there are no obstacles to information and knowledge access could significantly improve people’s ability to pursue careers that they might have thought impossible for them. If we can represent information and data in new ways, we may be able to reconsider notions of disability altogether. Inclusion and diversity in our workplace and in the scientific arena creates both equal opportunity and innovation. This has been a driving idea behind our Monash Sensory Science initiative, an exhibition program specifically designed for those with low vision, blindness, and other disabilities. Professor Jamie Rossjohn of the Rossjohn Laboratory, where I work as artist in residence, was well aware of the limitations for those with low vision and blindness, and of the lack of university science research outreach programs catering to this demographic.
Our exhibitions have been travelling across Australia since 2018 and we have engaged audiences of all ages and scientific backgrounds in the hopes of creating community awareness about biomedical research, and fostering the idea in young students that they too can pursue careers in science no matter what their perceived limitations might be. We have created these exhibitions to enable people to access information, ideas, and research around biomedicine in multisensory ways that include visual, tactile, and audio approaches. In our technology-driven world, so much is screen-based. This presents both a problem and an opportunity for those with low vision or blindness. Screen-based data is devoid of any sensory experience apart from vision and sound, as screens are not very interesting to touch, hold, or interact with. On the other hand, screens with zoom functionality enable those with low vision to see data at greater magnification, as well as access text-to-speech functions. If we can release data from this screen-bound environment, utilizing AI and multimodal delivery, we may be able to further eliminate obstacles to data access in myriad ways with enormous implications for diverse needs.
Omaar: Many applications intended to assist people who experience blindness or low vision are not trained on data created by people with these disabilities. For instance, some algorithms that assist people with disabilities (PWD) to identify objects in photographs are trained only on photographs taken by people without disabilities. Why is it important for these types of AI systems to be trained on pictures taken by PWD?
Tandori: This is an excellent question and one that touches on so many issues around low vision, blindness, and the lived experience of disability. This is really a question about how much we allow those with disabilities to speak about their lived experience, their needs, and their self-determination.
My PhD focused on the lived experience of how blindness looks. It was, as far as possible, an eyewitness account of vision loss and how the world appears from my perspective. How many of those in the medical profession know what vision loss or blindness actually looks like? Far from the blackness we might imagine, macular degeneration and other conditions of the retina can cause an absence of vision, flashing lights, metamorphopsia (a perceptual distortion that causes linear objects, such as lines on a grid, to look curvy or rounded), or a myriad of entoptic visual artifacts caused by the interaction between disease, the environment, and the brain. In medical texts, journals, and the wider media, my type of vision loss is portrayed as a static black spot at the centre of a perfectly defined visual field. But my vision loss is dynamic and ever-changing. At times, the visual field resembles a cross between a softly focused, light-filled Monet painting and the writhing, explosive brushstrokes of a van Gogh. The centre of the field in the scotoma (a partial loss of vision or blind spot in an otherwise normal visual field) is an experience of absence, where cortical completion fills the area of nothingness with surrounding color (yes, color!) and pattern.
In my piece Invisible Mona Lisa (shown on the left below), you can see the processes of cortical completion taking place: the area of the face is concealed by the scotoma, and the area where her head should be is filled with background color. My brain is working hard to make sense of the blind spot, actively completing the picture for me. This is largely how I see others, their faces obscured as though partially camouflaged.
Medical technologies have been unable to determine how entoptic symptoms of macular disease appear. It is by listening to the voices of those with lived experience of disability that we can come to better understand their perspective. If those voices are included in all aspects of society, from the medical, technological, and cultural aspects, we may find more effective solutions and innovative approaches that create suitable environments to minimize the impact of disability. Weaving the voices and lived experience of those with disability into the loop of AI systems would be hugely beneficial.
As an aside, some of the artworks I have created have completely confounded object recognition software, which has resulted in some hilarious outcomes, and may have implications in regard to AI, accessibility, and art for people who are blind. AI completely misrecognized my Sleeping Pangolin sculpture (shown above on the right), for example.
Omaar: Throughout your artistic and PhD career, you have explored ideas surrounding the dualism of vision: what it means to see as opposed to what it means to have vision. Is it fair to say that the work you do is augmenting people’s vision, i.e. using digital tools to help people imagine things in their mind’s eye?
Tandori: I have used digital tools to help people imagine how things look from my perspective and tried to give an eyewitness account of my own macular dystrophy as accurately as I possibly can. However, there is a conundrum in this that is completely unresolvable.
If I look at the artwork I create, I am always looking at it through the lens of a diseased retina. It is always going to look like an accurate portrayal of vision loss to me because I am seeing it through my own vision loss. The only way to really know if I am accurate is if I take out my eyes and put in a healthy pair, which we know is completely impossible. I have to work around this conundrum, utilizing my peripheral vision and zoom technologies on my computer screen to see as much as possible of what I have created.
There are many dualities in this task, from exploring the loss of vision through the visual language of art, the inescapability of creating works about vision loss through the prism of vision loss, observing my own observing and my diminished ability to observe, and the duality of creating images that look at “normal” vision and comparing it to “diseased” vision. Moreover, we have the dichotomy of seeing and vision, of the mind’s eye and the retina.
During my PhD (see some of the artworks here), I tried to create, as accurately as possible, a vision of my vision loss. Using a digital camera, I took images that might convey how something looks to those with “normal” vision. Sometimes the camera became the “perfect eye,” able to catch the scenes that I was unable to see by myself. I would then take these digital images and augment them using Photoshop and After Effects, to simulate my eye disease. In this way, I was trying to explore the gap between seeing and non-seeing. To my surprise, this could not be done with one single method. Digital tools could not express all the complexity of deteriorating vision. I needed to employ traditional methods of art-making in combination with digital technologies to fully articulate the range of symptoms. Moving back and forth between traditional and digital methods created a dialogue between analogue and digital ways of understanding the phenomena of vision loss. Oil paint, canvas, paper, and pencil create texture and visceral qualities that are absent from the digital image. Oil painting and drawing seem to articulate more about an experience, whereas digital images seem to be more about data. This was a fascinating interaction for me as an oil painter who was now discovering digital media techniques.
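One symptom described earlier, metamorphopsia, lends itself to a simple computational sketch of the kind of image augmentation mentioned above. This toy example is my own illustration of the general idea, not the artist's workflow or clinical data; the function names, distortion strength, and falloff are all assumed values. It bends the straight lines of an Amsler-style test grid with a localized sinusoidal displacement that fades away from the centre of the visual field:

```python
import math

def warp_point(x, y, cx=0.5, cy=0.5, strength=0.15, falloff=8.0):
    """Displace a point with a distortion centred on (cx, cy), mimicking
    the curved lines of metamorphopsia. Parameters are illustrative."""
    d2 = (x - cx) ** 2 + (y - cy) ** 2
    w = strength * math.exp(-falloff * d2)  # distortion fades with distance
    return x + w * math.sin(6 * math.pi * y), y + w * math.sin(6 * math.pi * x)

def warped_grid(n=11, samples=50):
    """Return the warped polylines of an n x n Amsler-style grid,
    as lists of (x, y) points ready to plot or paint over a photo."""
    lines = []
    for i in range(n):
        t = i / (n - 1)
        # horizontal line at height t, then vertical line at offset t
        lines.append([warp_point(s / (samples - 1), t) for s in range(samples)])
        lines.append([warp_point(t, s / (samples - 1)) for s in range(samples)])
    return lines
```

Applied to a photograph's pixel coordinates instead of grid lines, the same displacement field produces the "curvy or rounded" appearance of straight edges that the condition causes.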
Omaar: Some people are better at learning visually, others may prefer reading aloud, yet others may need hands-on experience. How do different tools address different people’s educational needs? Do different modalities work better in different contexts?
Tandori: People learn in different ways and sometimes utilizing more than one way of learning can be highly effective. Although my vision has deteriorated, I am sure I still learn best with the remaining vision that I have, and I don’t know if this is because I grew up learning and practicing art, or that I am simply a visual learner. I know that if I see a word on a printed page it seems to soak into my memory more effectively than if I see it on a computer screen or hear it spoken out loud.
The notion of different modalities and memory retention is also interesting. Engaging a variety of modalities to convey information could be highly effective. So much of our brain is dedicated to visual processing, but if that vision is limited, what happens to all that grey matter? Can it be accessed differently? Recently I had an interview with Neil Sahota of IBM on the AI for Good 2020 Global Summit podcast where we discussed “artistic intelligence” and how it could combine with artificial intelligence to produce innovative ways of accessing data and solving problems. Why can we not all be fabulous at art? Is there something in the creative approach that can be useful as an educational tool and in technology to help us think outside the box (or computer screen)?
Hopefully, utilizing multisensory modes of data delivery, whether for education, research, or engagement with audiences at our Sensory Science exhibitions, can promote deeper and more dynamic levels of understanding.
Omaar: Looking to the future, where do you think the greatest need for additional research is, if we are to enhance access and use of new technologies for PWD?
Tandori: Including those with the lived experience of disability would be an amazing place to start to enhance access and use of new technologies for these very same people. To include those with disabilities in discourses around science, technology, art, medicine, and everything else would be empowering and would change the world for the better. We can only understand the needs of those who have different needs if we ask them and engage as equals in developing more accessible, adaptable, user-friendly technologies and environments.
It is possible that if we focus research in this area, we may help to eliminate many of the barriers that PWD face, and if we can do that, then we eliminate the word “disability” itself. Can you imagine how this might affect the prospects of those who currently face a life of unemployment as a result of their disabilities? How this might empower them to be independent or pursue educational goals and careers they might never have thought attainable? Moreover, we need to make sure that these technologies are accessible to everyone across the globe, irrespective of their socio-economic position, and that these technologies are cost-effective.