Security Council adopts key resolution on Gaza crisis https://news.un.org/en/story/2023/12/1145022
The Council demanded that the parties “allow, facilitate and enable” the immediate, safe and unhindered delivery of humanitarian assistance at scale directly to the Palestinian civilian population throughout the Gaza Strip. It also requested the UN Secretary-General to appoint a Senior Humanitarian and Reconstruction Coordinator with responsibility for “facilitating, coordinating, monitoring, and verifying” in Gaza, as appropriate, the humanitarian nature of all relief consignments to the enclave provided through States that are not party to the conflict.
https://www.youtube.com/watch?v=fZmSVxTp0Rk The U.N. Security Council adopted a watered-down resolution Friday calling for immediately speeding aid deliveries to hungry and desperate civilians in Gaza but without the original plea for an “urgent suspension of hostilities” between Israel and Hamas.
The long-delayed vote in the 15-member council was 13-0 with the United States and Russia abstaining. The U.S. abstention avoided a third American veto of a Gaza resolution following Hamas’ surprise Oct. 7 attacks inside Israel. Russia wanted the stronger language restored; the U.S. did not.
https://twitter.com/tparsi/status/1738259729443385434 Biden managed to delete the call for suspension of hostilities as well as the establishment of a robust UN inspection mechanism in Gaza.
What are the startling figures in the RBI Report? MoneyCentral https://youtu.be/ZdZ0Pu_dZXs?start=123&end=403 21st Dec, 2023 Why is India compelled to import pulses? Mainly due to climate change: sowing for the Rabi season has been affected.
Semantic Decoder: Meta Just Achieved Mind-Reading Using AI https://www.youtube.com/watch?v=uiGl6oF5-cE ColdFusion Semantic reconstruction of continuous language from non-invasive brain recordings
https://www.nature.com/articles/s41593-023-01304-9 Jerry Tang, Amanda LeBel, Shailee Jain & Alexander G. Huth
Nature Neuroscience Currently, however, non-invasive language decoders can only identify stimuli from among a small set of words or phrases. Here we introduce a non-invasive decoder that reconstructs continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be applied to a range of tasks.
Toward a real-time decoding of images from brain activity October 18, 2023 https://ai.meta.com/blog/brain-ai-image-decoding-meg-magnetoencephalography/ Using magnetoencephalography (MEG), a non-invasive neuroimaging technique in which thousands of brain activity measurements are taken per second, we showcase an AI system capable of decoding the unfolding of visual representations in the brain with an unprecedented temporal resolution.
https://www.theguardian.com/technology/2023/may/01/ai-makes-non-invasive-mind-reading-possible-by-turning-thoughts-into-text The achievement overcomes a fundamental limitation of fMRI, which is that while the technique can map brain activity to a specific location with incredibly high resolution, there is an inherent time lag, which makes tracking activity in real time impossible.
The lag exists because fMRI scans measure the blood flow response to brain activity, which peaks and returns to baseline over about 10 seconds, meaning even the most powerful scanner cannot improve on this. “It’s this noisy, sluggish proxy for neural activity,” said Huth.
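To see why the signal is so sluggish, the measured BOLD signal can be modelled as neural activity convolved with a slow hemodynamic response function. The toy sketch below uses one common double-gamma parameterization of that response; the exact parameters are an assumption for illustration, not the study's model.

```python
# Illustration of fMRI's "sluggish proxy": BOLD ~ neural activity convolved
# with a slow hemodynamic response function (HRF). Double-gamma parameters
# below are a common textbook choice, assumed here for illustration only.
import numpy as np
from scipy.stats import gamma

tr = 0.5                                          # sampling interval (seconds)
t = np.arange(0, 30, tr)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)   # rise, peak, then undershoot
hrf /= hrf.max()

neural = np.zeros_like(t)
neural[4] = 1.0                                   # a brief neural event at t = 2 s

bold = np.convolve(neural, hrf)[: len(t)]
print("neural event at t = %.1f s, BOLD peak at t = %.1f s" % (t[4], t[np.argmax(bold)]))
# The BOLD response peaks seconds after the event and takes roughly 10 s to
# settle, so rapid word-by-word neural activity is smeared together.
```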
However, the advent of large language models – the kind of AI underpinning OpenAI’s ChatGPT – provided a new way in. These models are able to represent, in numbers, the semantic meaning of speech, allowing the scientists to look at which patterns of neuronal activity corresponded to strings of words with a particular meaning rather than attempting to read out activity word by word.
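A rough sketch of that idea follows; it is not the authors' actual pipeline. The decoder pairs an embedding of candidate word sequences with a fitted encoding model that predicts the brain response each candidate would evoke, then keeps the candidates whose predicted responses best match the recorded one. The vocabulary, embedding, and encoding model below are toy stand-ins, and because the toy embedding ignores word order, the decoder recovers the gist (word content) rather than the exact wording.

```python
# Toy sketch of embedding-based semantic decoding (not the published code).
# embed(), predict_bold() and the vocabulary are illustrative stand-ins for an
# LLM embedding, a fitted per-subject encoding model, and a language model
# proposing continuations.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "dog", "ran", "home", "quickly", "cat", "slept"]

def embed(words):
    """Map a word sequence to a fixed-length semantic vector.
    Stand-in for an LLM embedding; here a hashed bag of words."""
    v = np.zeros(16)
    for w in words:
        v[hash(w) % 16] += 1.0
    return v / (np.linalg.norm(v) + 1e-8)

def predict_bold(semantic_vec, W):
    """Linear encoding model: semantic features -> predicted voxel responses."""
    return W @ semantic_vec

def decode(observed_bold, W, length=5, beam=3):
    """Beam search over word sequences, keeping candidates whose *predicted*
    brain response is closest to the observed response."""
    beams = [([], 0.0)]
    for _ in range(length):
        scored = []
        for words, _ in beams:
            for w in VOCAB:                                   # propose continuations
                cand = words + [w]
                pred = predict_bold(embed(cand), W)
                score = -np.linalg.norm(pred - observed_bold)  # closer is better
                scored.append((cand, score))
        beams = sorted(scored, key=lambda x: x[1], reverse=True)[:beam]
    return beams[0][0]

# Toy demo: simulate a response to a "true" sentence, then decode its gist back.
W = rng.normal(size=(32, 16))                    # toy encoding-model weights
true_words = ["the", "dog", "ran", "home", "quickly"]
observed = predict_bold(embed(true_words), W) + 0.01 * rng.normal(size=32)
print(decode(observed, W))
```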
Using AI to decode speech from brain activity
Studying the brain to build AI that processes language as people do
This AI system can be deployed in real time to reconstruct, from brain activity, the images perceived and processed by the brain at each instant. This opens up an important avenue to help the scientific community understand how images are represented in the brain, and then used as foundations of human intelligence. Longer term, it may also provide a stepping stone toward non-invasive brain-computer interfaces in a clinical setting that could help people who, after suffering a brain lesion, have lost their ability to speak.
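One way to picture the decoding step behind such a system: train a brain encoder so that the embedding of an MEG window lands near the embedding of the image the person was viewing, as produced by a frozen pretrained image encoder. The sketch below assumes a CLIP-style contrastive objective; the architecture, shapes, and names are illustrative assumptions, not Meta's released code.

```python
# Illustrative sketch of aligning MEG activity with pretrained image embeddings
# (assumed CLIP-style contrastive setup; not Meta's actual implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BrainEncoder(nn.Module):
    """Maps a window of MEG sensor data (sensors x time) into an image-embedding space."""
    def __init__(self, n_sensors=273, n_times=181, emb_dim=512):
        super().__init__()
        self.conv = nn.Conv1d(n_sensors, 128, kernel_size=9, padding=4)
        self.head = nn.Linear(128 * n_times, emb_dim)

    def forward(self, meg):                      # meg: (batch, sensors, time)
        h = F.gelu(self.conv(meg))
        return F.normalize(self.head(h.flatten(1)), dim=-1)

def contrastive_loss(brain_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE: matching (MEG window, image) pairs should score
    higher than mismatched pairs within the batch."""
    logits = brain_emb @ image_emb.T / temperature
    targets = torch.arange(len(logits))
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))

# Toy training step on random data standing in for MEG windows and for
# embeddings from a frozen pretrained image encoder (e.g. CLIP or DINOv2).
encoder = BrainEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
meg = torch.randn(8, 273, 181)                   # 8 MEG windows
image_emb = F.normalize(torch.randn(8, 512), dim=-1)
loss = contrastive_loss(encoder(meg), image_emb)
loss.backward(); opt.step()
print(float(loss))
```

Once such an alignment is learned, a generative image model conditioned on the brain-derived embedding can render a guess at the viewed image at each instant, which is the real-time reconstruction described above.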