#inference

Andrzej Wąsowski ☑️ 🟥 ☮ ♥ ♬ 🧑‍💻 (CW: Self promotion)

Day 19 cont 🙏⛪️🕍🕌⛩️🛕 💽🧑‍💻

“The #LiberalParty has accidentally left part of its email provider’s #subscriber details exposed, revealing the types of #data harvested by the party during the #election campaign.

This gives rare #insight into some of the specific kinds of data the party is keeping on voters, including whether they are “predicted Chinese”, “predicted Jewish”, a “strong Liberal” and other #PersonalInformation.”

#AusPol / #DataScience / #inference / #voters / #Liberal / #LNP / #Nationals
<https://www.crikey.com.au/2025/04/17/victorian-liberals-data-exposed-email-mailchimp-federal-election-crikey/>
Eric Maugendre<p>"In real life, we weigh the anticipated consequences of the decisions that we are about to make. That approach is much more rational than limiting the percentage of making the error of one kind in an artificial (null hypothesis) setting or using a measure of evidence for each model as the weight."<br>Longford (2005) <a href="http://www.stat.columbia.edu/~gelman/stuff_for_blog/longford.pdf" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">http://www.</span><span class="ellipsis">stat.columbia.edu/~gelman/stuf</span><span class="invisible">f_for_blog/longford.pdf</span></a></p><p><a href="https://hachyderm.io/tags/modeling" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>modeling</span></a> <a href="https://hachyderm.io/tags/nullHypothesis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>nullHypothesis</span></a> <a href="https://hachyderm.io/tags/probability" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>probability</span></a> <a href="https://hachyderm.io/tags/probabilities" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>probabilities</span></a> <a href="https://hachyderm.io/tags/pValues" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>pValues</span></a> <a href="https://hachyderm.io/tags/statistics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>statistics</span></a> <a href="https://hachyderm.io/tags/stats" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>stats</span></a> <a href="https://hachyderm.io/tags/statisticalLiteracy" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>statisticalLiteracy</span></a> <a href="https://hachyderm.io/tags/bias" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>bias</span></a> <a href="https://hachyderm.io/tags/inference" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>inference</span></a> <a href="https://hachyderm.io/tags/modelling" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>modelling</span></a> <a href="https://hachyderm.io/tags/regression" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>regression</span></a> <a href="https://hachyderm.io/tags/linearRegression" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>linearRegression</span></a></p>
Eric Maugendre

Feature Selection in Python; a script ready to use: https://johfischer.com/2021/08/06/correlation-based-feature-selection-in-python-from-scratch/

#interpretability #featureSelection #python #probability #probabilities #bigData #classification #linearRegression #regression #Schusterbauer #inference #AIDev
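As a rough illustration of the linked approach: a minimal sketch of correlation-based feature selection with pandas, dropping one feature from each highly correlated pair. The threshold and column names are placeholders; the linked post builds this from scratch and may differ in detail.

```python
import numpy as np
import pandas as pd

def drop_correlated(df: pd.DataFrame, threshold: float = 0.9) -> pd.DataFrame:
    """Drop one feature from each pair whose |correlation| exceeds threshold."""
    corr = df.corr().abs()
    # Keep only the upper triangle so each pair is inspected once.
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)

# Toy usage: x2 duplicates x1, so it gets dropped.
df = pd.DataFrame({"x1": [1, 2, 3, 4], "x2": [2, 4, 6, 8], "x3": [4, 1, 3, 2]})
print(drop_correlated(df).columns.tolist())  # ['x1', 'x3']
```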
Judith van Stegeren

Many companies are currently scrambling for ML infra engineers. They need people who know how to manage AI infrastructure and who can seriously speed up training and inference with specialized tooling like vLLM, Triton, TensorRT, Torchtune, etc.

#inference #training #genai #triton #vllm #pytorch #torchtune #tensorrt #nvidia
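For context, serving with vLLM in its simplest offline form looks roughly like the sketch below; the model name is just an example stand-in, and production setups add batching, quantization, and serving endpoints on top.

```python
from vllm import LLM, SamplingParams

# Example checkpoint only; substitute any Hugging Face model you can run.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["What does an ML infra engineer do?"], params)
print(outputs[0].outputs[0].text)
```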
Computo

The summer is coming to an end... let's see what publications it brought to Computo!

First, "AdaptiveConformal: An R package for adaptive conformal inference" by Herbert Susmann, Antoine Chambaz and Julie Josse is available with (you guessed it) R code at doi.org/10.57750/edan-5f53

The authors put together a detailed review of 5 algorithms for adaptive conformal inference (used to provide prediction intervals for sequentially observed data), complete with theoretical guarantees and experimental results both in simulations and on a real case study of producing prediction intervals for influenza incidence in the United States.

The paper highlights the importance of properly choosing tuning parameters to obtain good utility and of having access to good point predictions.

As the title implies, the paper comes with an R package, AdaptiveConformal, available from GitHub at https://github.com/herbps10/AdaptiveConformal. It provides implementations of the 5 algorithms, as well as tools for visualization and summarization of prediction intervals.

#reproducibility #openScience #openAccess #openSource #rStats #inference
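The core idea behind adaptive conformal inference is a simple online update of the working miscoverage level. Here is a generic Python sketch of that update (after Gibbs & Candès, 2021); it is not the R package's API, and the score stream is invented for illustration.

```python
import numpy as np

def adaptive_conformal(scores, alpha=0.1, gamma=0.01):
    """Online ACI: adapt the working level alpha_t so that realized
    coverage tracks the 1 - alpha target on sequential data."""
    alpha_t = alpha
    errs = []
    for t in range(1, len(scores)):
        # Conformal quantile of past nonconformity scores at level 1 - alpha_t.
        q = np.quantile(scores[:t], np.clip(1 - alpha_t, 0, 1))
        err = float(scores[t] > q)        # 1 if the new point was missed
        alpha_t += gamma * (alpha - err)  # the ACI update rule
        errs.append(err)
    return np.mean(errs)                  # realized miscoverage

rng = np.random.default_rng(0)
print(adaptive_conformal(rng.exponential(size=2000)))  # close to 0.1
```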
Harald Sack

One benefit of RDF(S) in comparison with traditional data schemas lies in its ability to allow logical inference of new knowledge. Admittedly, RDFS doesn't allow for much semantic expressivity, but we have class and property hierarchies as well as domain and range restrictions, which enable us to entail new RDF triples.

#ISE2024 lecture 07, slides: https://drive.google.com/file/d/1gJ3RD3Sz2JuEpCF-mBBfA85boYP_jrON/view?usp=drive_link

#RDF #knowledgegraphs #semanticweb #inference #symbolicAI #AI #AIart @sourisnumerique @enorouzi @shufan @fizise
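To make that entailment concrete, a small self-contained sketch with rdflib, applying just the domain rule (rdfs2) and the subclass rule (rdfs9) to a toy graph; the names are invented for illustration.

```python
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Dog, RDFS.subClassOf, EX.Animal))  # class hierarchy
g.add((EX.hasOwner, RDFS.domain, EX.Dog))    # domain restriction
g.add((EX.rex, EX.hasOwner, EX.alice))       # one asserted triple

# Apply two RDFS entailment rules to a fixpoint:
# rdfs2: (p domain C), (s p o)        =>  (s type C)
# rdfs9: (C subClassOf D), (s type C) =>  (s type D)
changed = True
while changed:
    changed = False
    for s, p, o in list(g):
        for _, _, dom in g.triples((p, RDFS.domain, None)):
            if (s, RDF.type, dom) not in g:
                g.add((s, RDF.type, dom))
                changed = True
    for s, _, cls in list(g.triples((None, RDF.type, None))):
        for _, _, sup in g.triples((cls, RDFS.subClassOf, None)):
            if (s, RDF.type, sup) not in g:
                g.add((s, RDF.type, sup))
                changed = True

print(sorted(g.objects(EX.rex, RDF.type)))  # rex is a Dog and an Animal
```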
Eric Maugendre<p>"Extract Year from a datetime column", by Piyush Raj: <a href="https://datascienceparichay.com/article/pandas-extract-year-from-datetime-column/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">datascienceparichay.com/articl</span><span class="invisible">e/pandas-extract-year-from-datetime-column/</span></a></p><p><a href="https://hachyderm.io/tags/dataDev" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>dataDev</span></a> <a href="https://hachyderm.io/tags/Python" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Python</span></a> <a href="https://hachyderm.io/tags/Pandas" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Pandas</span></a> <a href="https://hachyderm.io/tags/timeSeries" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>timeSeries</span></a> <a href="https://hachyderm.io/tags/data" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>data</span></a> <a href="https://hachyderm.io/tags/inference" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>inference</span></a> <a href="https://hachyderm.io/tags/dataAnalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>dataAnalysis</span></a> <a href="https://hachyderm.io/tags/statistics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>statistics</span></a> <a href="https://hachyderm.io/tags/stats" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>stats</span></a></p>
Eric Maugendre

An easy guide to predicting possible future quantities, by Mercy Kibet: https://www.influxdata.com/blog/guide-regression-analysis-time-series-data/#heading0

#timeSeries #data #inference #linearRegression #dataScience #futures #money #trends #Python
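The basic move in a guide like this is to regress the series on a time index and extrapolate. A toy scikit-learn sketch (numbers invented):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

y = np.array([112.0, 118, 132, 129, 121, 135, 148, 148, 136, 141])  # toy series
X = np.arange(len(y)).reshape(-1, 1)        # time index as the lone feature

model = LinearRegression().fit(X, y)
future = np.arange(len(y), len(y) + 3).reshape(-1, 1)
print(model.predict(future))                # trend forecast for the next 3 steps
```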
Boiling Steam

PowerInfer: Fast Large Language Model Serving with a Consumer-Grade GPU [pdf]: https://ipads.se.sjtu.edu.cn/_media/publications/powerinfer-20231219.pdf

#linux #update #foss #release #powerinfer #faster #inference #cpu #gpu #hardware #llm

If you are curious about AI but don't have a fancy PC or graphics card, I forked and modified this repo to run a small open source #LLM from #Mozilla. It uses a bunch of great libraries and the CPU for #inference.

All you need is 8 GB of RAM, tested on #ubuntu. Your mileage may vary on other OSs.

#AI #selfhosted #chatbot

github.com/pingud98/mpt-7B-inference

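I haven't verified the fork's exact setup, but CPU-only inference on a model like MPT-7B typically looks something like this with the ctransformers library and quantized weights. The repo name and settings below are assumptions for illustration, not the fork's pinned choices.

```python
# pip install ctransformers
from ctransformers import AutoModelForCausalLM

# Assumed quantized-weights repo; the fork above may pin a different build.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/MPT-7B-GGML",   # hypothetical GGML build of MPT-7B
    model_type="mpt",         # tells ctransformers which architecture to load
)
print(llm("Explain CPU inference in one sentence:", max_new_tokens=64))
```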

#Arxivfeed:

"Simulation-based inference for efficient identification of generative models in connectomics"
biorxiv.org/content/10.1101/20

bioRxiv: "Simulation-based inference for efficient identification of generative models in connectomics"

Abstract: Recent advances in connectomics research enable the acquisition of increasing amounts of data about the connectivity patterns of neurons. How can we use this wealth of data to efficiently derive and test hypotheses about the principles underlying these patterns? A common approach is to simulate neural networks using a hypothesized wiring rule in a generative model and to compare the resulting synthetic data with empirical data. However, most wiring rules have at least some free parameters, and identifying parameters that reproduce empirical data can be challenging as it often requires manual parameter tuning. Here, we propose to use simulation-based Bayesian inference (SBI) to address this challenge. Rather than optimizing a single rule to fit the empirical data, SBI considers many parametrizations of a wiring rule and performs Bayesian inference to identify the parameters that are compatible with the data. It uses simulated data from multiple candidate wiring rules and relies on machine learning methods to estimate a probability distribution (the "posterior distribution over rule parameters conditioned on the data") that characterizes all data-compatible rules. We demonstrate how to apply SBI in connectomics by inferring the parameters of wiring rules in an in silico model of the rat barrel cortex, given in vivo connectivity measurements. SBI identifies a wide range of wiring rule parameters that reproduce the measurements. We show how access to the posterior distribution over all data-compatible parameters allows us to analyze their relationship, revealing biologically plausible parameter interactions and enabling experimentally testable predictions. We further show how SBI can be applied to wiring rules at different spatial scales to quantitatively rule out invalid wiring hypotheses. Our approach is applicable to a wide range of generative models used in connectomics, providing a quantitative and efficient way to constrain model parameters with empirical connectivity data.

Competing Interest Statement: The authors have declared no competing interest.
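For a flavor of the method (not the paper's actual code), here is a toy run with the sbi Python package: a stand-in two-parameter "wiring rule" simulator, a uniform prior, and neural posterior estimation to recover the data-compatible parameters. The simulator and "measurement" are invented.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

def simulator(theta):
    """Stand-in wiring rule: map 2 parameters to 2 summary statistics."""
    p, scale = theta[..., 0], theta[..., 1]
    conn = p * torch.exp(-scale)                 # toy connection probability
    return torch.stack([conn, conn * (1 - conn)], dim=-1)

prior = BoxUniform(low=torch.zeros(2), high=torch.ones(2))
theta = prior.sample((2000,))
x = simulator(theta)

inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

x_obs = torch.tensor([0.12, 0.1056])             # pretend "measurement"
samples = posterior.sample((500,), x=x_obs)      # data-compatible parameters
print(samples.mean(dim=0))
```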

What we experience in the current moment tells us about *now*, but what does it tell us about the past or future? And does the current moment tell us *more* about the past or about the future?

Historically, the statistical learning literature has tended to study these sorts of questions using highly simplified lab-created sequences (e.g., Markov processes). Statistically, these sequences are temporally symmetric. Behaviorally, people are just as good at predicting unknown past and future states, given observations in the present.
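That statistical symmetry is easy to check numerically: for a stationary Markov chain, the uncertainty about the next state given the present equals the uncertainty about the previous state given the present. A small numpy demo (toy transition matrix, not from the study):

```python
import numpy as np

# A stationary (but not time-reversible) 3-state Markov chain.
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3],
              [0.3, 0.1, 0.6]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.isclose(evals, 1)].ravel())
pi /= pi.sum()

# Backward transitions via Bayes: B[j, i] = P(X_{t-1} = i | X_t = j).
B = (pi[:, None] * P).T / pi[:, None]

def cond_entropy(T, dist):
    """H(other state | current state) for transition matrix T."""
    return -np.sum(dist[:, None] * T * np.log(T))

# Equal: given the present, past and future are equally uncertain.
print(cond_entropy(P, pi), cond_entropy(B, pi))
```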

But in our own lives, we have memories of the past but not the future, imposing an "arrow of time" on our subjective experiences known as the "psychological arrow of time." This means we know more about our own pasts than our own futures. (We often take this for granted, even though most laws of physics are temporally symmetric!)

We (@xxming, Ziyan Zhu, and I) were curious: in *other* people's lives, where the past and future are equally unknown (and unremembered), are our inferences symmetric (like in typical statistical learning studies) or asymmetric (like for our own lives)?

We ran a study to test this, and we found something kind of neat: it turns out the psychological arrow of time is communicable to other people through conversation! Essentially, what people say is influenced by what they know. And since each person knows more about their own past, this asymmetry is picked up by other people.

We think there are all sorts of interesting implications here about how we communicate our own biases and knowledge asymmetries to other people. @xxming also has some really mind-blowing ideas about how an *asymmetric* law of physics (the second law of thermodynamics) might help explain the psychological arrow of time and some other fundamental properties of memory. (We're planning to write up an opinion paper about these ideas later.)

We hope you'll check out our preprint, send along some thoughts, questions, constructive criticisms, etc.!

#preprint: psyarxiv.com/yp2qu/
#code and #data: github.com/ContextLab/predicti


@NicoleCRust

I often refer to W.S. #McCulloch as the Grandfather of #AI — to me that makes C.S. #Peirce its Great Grandfather. Many of my own explorations begin with Peirce, just for starters his graph-theoretic and triadic relational spins on #Inference, #Information, and #Inquiry. But I always find, if I apply his way of working to the state of his work as he left it — recursively as it were — it leads on to new adventures.

But I've got a string of lights to debug — more in the New Year …

@pseudacris, Gene Hunt & I wrote about cross-disciplinary insights in @TrendsEcolEvo (doi.org/10.1016/j.tree.2022.10). We hope many will read and discuss it with their colleagues/labs. Often we disregard important information and insights from other fields while attempting to make inferences in our own (related) field. We short-change ourselves when we do that; we can do better by reaching out across fields.

Pls. boost!

#macroevolution #phylogenetics #fossils #inference #paleobiology #ecology

Are p-values convoluted and arcane? Are confidence intervals hopelessly confusing? No! These ideas can be challenging to teach and learn, but they represent an invaluable way of thinking about scientific results. Once they're properly understood, they are more intuitive than they get credit for. Here is an attempt at a very brief explanation of why I love the logic of null hypothesis significance testing. [1/7]
#Statistics #Frequentist #NHST #Inference
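Not part of the thread, but for readers who want the logic in miniature: a permutation test makes "how surprising is this result under the null?" fully explicit. A toy sketch (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two invented samples; null hypothesis: both come from the same distribution.
a = rng.normal(0.0, 1.0, 30)
b = rng.normal(0.5, 1.0, 30)
observed = b.mean() - a.mean()

# Permutation test: if group labels were arbitrary, how often would a
# difference at least this large arise by chance alone?
pooled = np.concatenate([a, b])
diffs = []
for _ in range(10_000):
    rng.shuffle(pooled)
    diffs.append(pooled[30:].mean() - pooled[:30].mean())

p_value = np.mean(np.abs(diffs) >= abs(observed))
print(p_value)  # small p: the result would be surprising if the null were true
```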