If you are curious about AI but don't have a fancy PC or graphics card, I forked and modified this repo to run a small open source #LLM from #Mozilla. It uses a bunch of great libraries and the CPU for #inference.
All you need is 8 GB of RAM; tested on #ubuntu. Your mileage may vary on other OSs.
#Nvidia's new #AI #chip claims it will drop the costs of running #LLMs
“You can take pretty much any #LargeLanguageModel you want and put it in this and it will inference like crazy.
The #inference cost of large language models will drop significantly.”
Postdoc position for ecologists: work at Cornell University in Ithaca and at the CEES Biology Department in Oslo, Norway, using citizen-science data on birds to estimate species distribution, phenology, and abundance. Deadline: 28 Feb 2023
Please boost
Parallel generation from autoregressive LMs
Para ll el!
Well, not exactly: use a fast draft LM to propose the next tokens first, then check them all at once with the big model
https://arxiv.org/abs/2302.01318
#NLProc #Generation #inference
#DeepMind
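The idea from the linked paper (speculative sampling) can be sketched in a few lines. Everything below is a toy illustration I made up — the "models" are stand-in probability tables, not real LMs — but the accept/reject rule is the one the paper describes: accept a drafted token with probability min(1, p/q), and on rejection resample from the residual distribution max(0, p − q), which keeps the output exactly distributed as the big model.

```python
import random

random.seed(0)

# Toy vocabulary and two stand-in "models" (purely illustrative).
VOCAB = ["the", "cat", "sat", "on", "mat"]

def draft_lm(context):
    """Fast draft model: uniform over the vocabulary."""
    return {w: 1.0 / len(VOCAB) for w in VOCAB}

def target_lm(context):
    """Slow target model: strongly prefers one continuation."""
    probs = {w: 0.1 for w in VOCAB}
    probs["cat" if context[-1] == "the" else "the"] = 0.6
    return probs  # sums to 1.0

def sample(dist):
    words = list(dist)
    return random.choices(words, weights=[dist[w] for w in words])[0]

def speculative_step(context, k=4):
    # 1) Cheaply draft k tokens with the fast model.
    ctx = list(context)
    drafted = []
    for _ in range(k):
        tok = sample(draft_lm(ctx))
        drafted.append(tok)
        ctx.append(tok)
    # 2) Verify drafts against the target model. A real system scores
    #    all k positions in ONE parallel forward pass; we loop for clarity.
    accepted = []
    ctx = list(context)
    for tok in drafted:
        p, q = target_lm(ctx), draft_lm(ctx)
        if random.random() < min(1.0, p[tok] / q[tok]):
            accepted.append(tok)  # kept: consistent with the target dist
            ctx.append(tok)
        else:
            # Rejected: resample from the residual max(0, p - q), which
            # preserves the target distribution exactly. Remaining drafts
            # are discarded. (The full algorithm also samples one bonus
            # token when all k drafts are accepted.)
            residual = {w: max(0.0, p[w] - q[w]) for w in VOCAB}
            accepted.append(sample(residual))
            break
    return accepted

print(speculative_step(["the"], k=4))
```

The payoff: you pay for one parallel pass of the big model per *batch* of drafted tokens instead of one sequential pass per token, which is why generation speeds up without changing the output distribution.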
"Simulation-based inference for efficient identification of generative models in connectomics"
https://www.biorxiv.org/content/10.1101/2023.01.31.526269v1
"Meta sues 'scraping-for-hire' service that sells user data to law enforcement: Israeli firm says it uses AI to analyze 'billions of "human pixels" and signals.'" https://arstechnica.com/information-technology/2023/01/meta-sues-scraping-for-hire-service-that-sells-user-data-to-law-enforcement/ #AI #Israel #Voyager #Facebook #Instagram #Twitter #socialmedia #privacy #inference #surveillance
What we experience in the current moment tells us about *now*, but what does it tell us about the past or future? And does the current moment tell us *more* about the past or about the future?
Historically, the statistical learning literature has tended to study these sorts of questions using highly simplified lab-created sequences (e.g., Markov processes). Statistically, these sequences are temporally symmetric. Behaviorally, people are just as good at predicting unknown past and future states, given observations in the present.
But in our own lives, we have memories of the past but not the future, imposing an "arrow of time" on our subjective experiences known as the "psychological arrow of time." This means we know more about our own pasts than our own futures. (We often take this for granted, even though most laws of physics are temporally symmetric!)
We (@xxming, Ziyan Zhu, and I) were curious: in *other* people's lives, where the past and future are equally unknown (and unremembered), are our inferences symmetric (like in typical statistical learning studies) or asymmetric (like for our own lives)?
We ran a study to test this, and we found something kind of neat: it turns out the psychological arrow of time is communicable to other people through conversation! Essentially, what people say is influenced by what they know. And since each person knows more about their own past, this asymmetry is picked up by other people.
We think there are all sorts of interesting implications here about how we communicate our own biases and knowledge asymmetries to other people. @xxming also has some really mind-blowing ideas about how an *asymmetric* law of physics (the second law of thermodynamics) might help explain the psychological arrow of time and some other fundamental properties of memory. (We're planning to write up an opinion paper about these ideas later.)
We hope you'll check out our preprint, send along some thoughts, questions, constructive criticisms, etc.!
#preprint: https://psyarxiv.com/yp2qu/
#code and #data: https://github.com/ContextLab/prediction-retrodiction-paper
I often refer to W.S. #McCulloch as the Grandfather of #AI — to me that makes C.S. #Peirce its Great Grandfather. Many of my own explorations begin with Peirce, just for starters his graph-theoretic and triadic relational spins on #Inference, #Information, and #Inquiry. But I always find, if I apply his way of working to the state of his work as he left it — recursively as it were — it leads on to new adventures.
But I've got a string of lights to debug — more in the New Year …
@pseudacris, Gene Hunt & I wrote about cross-disciplinary insights in @TrendsEcolEvo (https://doi.org/10.1016/j.tree.2022.10.013), which we hope many will read and discuss with their colleagues/labs. We often disregard important information and insights from other fields while attempting to make inferences in our own (related) field. We short-change ourselves when we do that; we can do better by reaching out across fields.
Pls. boost!
#macroevolution #phylogenetics #fossils #inference #paleobiology #ecology
Are p-values convoluted and arcane? Are confidence intervals hopelessly confusing? No! These ideas can be challenging to teach and learn, but they represent an invaluable way of thinking about scientific results. Once they're properly understood, they are more intuitive than they get credit for. Here is an attempt at a very brief explanation of why I love the logic of null hypothesis significance testing. [1/7]
#Statistics #Frequentist #NHST #Inference
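To make the logic concrete, here's a tiny self-contained example (the data are made up for illustration). The null hypothesis says the group labels don't matter, so we build the null distribution ourselves by shuffling the labels many times; the p-value is just the fraction of shuffles that produce a difference at least as extreme as the one we observed.

```python
import random
from statistics import mean

random.seed(42)

# Hypothetical data: reaction times (ms) for two small groups.
control   = [312, 298, 305, 320, 295, 310, 301, 315]
treatment = [290, 285, 300, 288, 295, 280, 292, 298]

observed = mean(control) - mean(treatment)  # 16.0 ms

# Null hypothesis: labels are exchangeable. Simulate the null
# distribution by repeatedly shuffling the pooled data.
pooled = control + treatment
n = len(control)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:n]) - mean(pooled[n:])
    if abs(diff) >= abs(observed):  # two-sided test
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.1f} ms, p = {p_value:.4f}")
```

That's the whole idea of NHST in one loop: "if chance alone were at work, how often would I see something this extreme?" A small p-value means the observed difference would be a rare fluke under the null.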