Identifying the precise type and primary site of a patient's cancer is the first step in selecting an effective treatment. In rare cases, however, the origin of the cancer cannot be determined even with rigorous testing. Although these tumors of uncertain origin tend to be aggressive, oncologists must treat them with non-targeted drugs, which generally results in high toxicity and low survival rates.
To address this, researchers from the Koch Institute for Integrative Cancer Research at the Massachusetts Institute of Technology (MIT) and Massachusetts General Hospital (MGH) have created a new deep learning approach that helps classify tumors of unknown origin by examining gene expression programs related to early cell development and differentiation.
Developing a diagnostic tool from a machine learning model that exploits the variations between healthy and cancerous cells, and between different types of cancer, requires a delicate balancing act.
If a model is too sophisticated and accounts for an excessive number of aspects of cancer gene expression, it may appear to learn the training data perfectly but fail when presented with new data. However, if the number of features is cut too far to simplify the model, it may not capture the kinds of information that lead to correct classification of cancer types.
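This tradeoff can be sketched with a toy experiment (not the researchers' actual model): an ordinary least squares fit given many irrelevant features memorizes its training data but generalizes worse than the same fit restricted to the few features that matter. All names and sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_train, n_test = 40, 400
n_signal, n_noise = 3, 40  # only 3 of the 43 features carry signal

X_train = rng.normal(size=(n_train, n_signal + n_noise))
X_test = rng.normal(size=(n_test, n_signal + n_noise))
w_true = np.zeros(n_signal + n_noise)
w_true[:n_signal] = [2.0, -1.5, 1.0]
y_train = X_train @ w_true + rng.normal(size=n_train)
y_test = X_test @ w_true + rng.normal(size=n_test)

def fit_and_score(cols):
    # Ordinary least squares restricted to the given feature columns,
    # scored by mean squared error on held-out data.
    cols = list(cols)
    w, *_ = np.linalg.lstsq(X_train[:, cols], y_train, rcond=None)
    pred = X_test[:, cols] @ w
    return np.mean((y_test - pred) ** 2)

err_all = fit_and_score(range(n_signal + n_noise))  # 43 features: overfits
err_few = fit_and_score(range(n_signal))            # 3 relevant features

print(err_few < err_all)
```

With 43 features and only 40 training samples, the full model fits the training set almost exactly yet does much worse on new data, which is the failure mode described above.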
To strike a balance between reducing the number of features and selecting the most relevant data, the scientists focused the model on cancer cell markers of disrupted developmental pathways. As an embryo develops and undifferentiated cells specialize into the cells of various organs, a plethora of pathways govern cell division, growth, shape change, and migration.
As a tumor grows, cancer cells lose many of the specialized characteristics of mature cells. As they gain the ability to multiply, change, and metastasize to other tissues, they begin to resemble embryonic cells in some ways: in cancer cells, many of the gene expression pathways that govern embryogenesis are reactivated or deregulated.
The researchers took the gene expression profiles of tumor samples from The Cancer Genome Atlas (TCGA) and broke them down into distinct components, each corresponding to a particular stage in a tumor's development. They then assigned each component a mathematical value and fed these values into a machine learning model called the Developmental Multilayer Perceptron (D-MLP), which scores a tumor based on its developmental makeup and then predicts where it came from.
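The two-stage idea described above can be sketched roughly (this is a hedged approximation, not the published D-MLP code): decompose expression data into a small set of component scores, then train a multilayer perceptron on those scores to predict the tissue of origin. The decomposition method, data, and hyperparameters below are all illustrative stand-ins.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy stand-in for TCGA expression data: 120 tumors x 50 genes,
# drawn from 3 synthetic "tumor types" with distinct profiles.
n_per_type, n_genes, n_types = 40, 50, 3
profiles = rng.gamma(2.0, size=(n_types, n_genes))
X = np.vstack(
    [rng.poisson(p, size=(n_per_type, n_genes)) for p in profiles]
).astype(float)
y = np.repeat(np.arange(n_types), n_per_type)

# Step 1: break expression down into a small set of components
# (a stand-in for the developmental gene-expression programs).
nmf = NMF(n_components=5, init="nndsvda", random_state=0, max_iter=500)
scores = nmf.fit_transform(X)  # per-tumor score on each component

# Step 2: feed the component scores to a multilayer perceptron
# that predicts the tumor's origin.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(scores, y)
print(clf.score(scores, y))  # training accuracy on the toy data
```

The key design point is that the classifier never sees raw gene expression, only the low-dimensional component scores, which is how the approach keeps the feature count small while retaining the relevant developmental signal.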
Meanwhile, the release of DALL-E delighted the internet. DALL-E is an artificial-intelligence-based image generator whose name was inspired by the artist Salvador Dalí and the cute robot WALL-E.
It uses natural language prompts to create whatever mysterious and beautiful image your heart desires. When people typed in prompts like "smiling gopher holding an ice cream cone," they saw them come to life almost instantly.
To create an image, DALL-E 2 uses what is called a "diffusion model," which tries to encode the entire text prompt in a single description. When the text contains many details, however, a single description struggles to capture them all.
Although diffusion models are very adaptable, they sometimes have trouble understanding how certain concepts fit together. To create more complex images that are easier to interpret, scientists at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) modified the structure of the typical model.
Diffusion models generate images through an iterative refinement process, repeating a series of steps over and over until the desired image emerges. The model starts with a "bad," noisy image and improves it step by step until it becomes the final result.
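The refinement loop can be illustrated with a minimal numerical sketch (a stand-in for a trained denoiser, not a real diffusion network): start from pure noise and repeatedly nudge the sample toward a target, just as the learned model nudges a noisy image toward a plausible one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a 1-D array the sampler should converge to.
target = np.linspace(-1.0, 1.0, 16)

# Start from pure noise, as a diffusion sampler does.
x = rng.normal(size=16)

def denoise_step(x):
    # Stand-in for the learned denoiser: move the sample a small
    # fraction of the way toward the target at each step.
    return x + 0.2 * (target - x)

for _ in range(50):
    x = denoise_step(x)

print(np.max(np.abs(x - target)))  # close to 0 after refinement
```

Each pass removes a fraction of the remaining error, so after enough steps the "bad" starting point has been refined into the intended result.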
By stitching together multiple diffusion models, the researchers can refine the image jointly at every step, producing a result that carries the characteristics each model contributes. And with multiple models working together, users can choose from many more creative image combinations.
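One common way to compose models at every step, which this sketch assumes, is to sum each model's score (the direction it pushes the sample) before updating. Here each "model" is a toy Gaussian concept, so the composed sampler settles where both concepts agree; the concept names are purely illustrative.

```python
import numpy as np

# Each "model" supplies a score pulling the sample toward its own
# Gaussian concept; the labels below are illustrative stand-ins.
concept_a = np.array([2.0, 0.0])   # e.g. "red"
concept_b = np.array([0.0, 2.0])   # e.g. "sphere"

def score(x, center):
    # Score (gradient of log-density) of a unit Gaussian at `center`.
    return center - x

rng = np.random.default_rng(0)
x = rng.normal(size=2)  # start from noise

for _ in range(200):
    # Composition: sum the scores from both models at every step.
    g = score(x, concept_a) + score(x, concept_b)
    x = x + 0.05 * g

print(x)  # settles at the midpoint, where the two scores cancel
```

Because the updates are summed at every step rather than merged into one text description up front, each model keeps enforcing its own concept throughout the refinement, which is what lets the combined image satisfy all of them.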