lingo.lol is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for linguists, philologists, and other lovers of languages.

#metrics


A Comprehensive Framework For Evaluating The Quality Of Street View Imagery
--
doi.org/10.1016/j.jag.2022.103 <-- shared paper
--
“HIGHLIGHTS
• [They] propose the first comprehensive quality framework for street view imagery.
• Framework comprises 48 quality elements and may be applied to other image datasets.
• [They] implement partial evaluation for data in 9 cities, exposing varying quality.
• The implementation is released open-source and can be applied to other locations.
• [They] provide an overdue definition of street view imagery..."
#GIS #spatial #mapping #streetlevelimagery #Crowdsourcing #QualityAssessmentFramework #Heterogeneity #imagery #dataquality #metrics #QA #urban #cities #remotesensing #spatialanalysis #StreetView #Google #Mapillary #KartaView #commercial #crowdsourced #opendata #consistency #standards #specifications #metadata #accuracy #precision #spatiotemporal #terrestrial #assessment
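The excerpt doesn't reproduce the paper's 48 quality elements, but the general shape of element-wise quality scoring is easy to illustrate. A minimal sketch, assuming invented element names, checks, and weights (none of this is the paper's actual framework):

```python
# Hypothetical sketch of per-element quality scoring for street view imagery.
# The element names, checks, and weights are illustrative only; the paper
# defines 48 quality elements, which are not reproduced here.

def score_image(meta: dict) -> float:
    """Return a [0, 1] quality score from a few example metadata checks."""
    checks = {
        "has_gps":       1.0 if meta.get("lat") is not None and meta.get("lon") is not None else 0.0,
        "has_timestamp": 1.0 if meta.get("captured_at") else 0.0,
        "resolution_ok": 1.0 if meta.get("width", 0) >= 1024 else 0.0,  # assumed threshold
    }
    weights = {"has_gps": 0.5, "has_timestamp": 0.3, "resolution_ok": 0.2}  # assumed
    return sum(weights[k] * v for k, v in checks.items())

print(score_image({"lat": 1.29, "lon": 103.85, "captured_at": "2021-06-01", "width": 2048}))  # 1.0
```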

What's going on when some #universities jump more than 950% in just a few years on #metrics used in university #rankings? Are they gaming the metrics?
biorxiv.org/content/10.1101/20

"Key findings include publication growth of up to 965%, concentrated in STEM fields; surges in hyper-prolific authors and highly cited articles; and dense internal co-authorship and citation clusters. The group [of studied institutions] also exhibited elevated shares of publications in delisted journals and high retraction rates. These patterns illustrate vulnerabilities in global ranking systems, as metrics lose meaning when treated as targets (Goodhart’s Law) and institutions emulate high-performing peers under competitive pressure (institutional isomorphism). Without reform, rankings may continue incentivizing behaviors that distort scholarly contribution and compromise research integrity."

#Academia
@academicchatter

bioRxiv · Gaming the Metrics? Bibliometric Anomalies and the Integrity Crisis in Global University Rankings
Global university rankings have transformed how certain institutions define success, often elevating metrics over meaning. This study examines universities with rapid research growth that suggest metric-driven behaviors. Among the 1,000 most publishing institutions, 98 showed extreme output increases between 2018-2019 and 2023-2024. Of these, 18 were selected for exhibiting sharp declines in first and corresponding authorship. Compared to national, regional, and international norms, these universities (in India, Lebanon, Saudi Arabia, and the United Arab Emirates) display patterns consistent with strategic metric optimization. Key findings include publication growth of up to 965%, concentrated in STEM fields; surges in hyper-prolific authors and highly cited articles; and dense internal co-authorship and citation clusters. The group also exhibited elevated shares of publications in delisted journals and high retraction rates. These patterns illustrate vulnerabilities in global ranking systems, as metrics lose meaning when treated as targets (Goodhart’s Law) and institutions emulate high-performing peers under competitive pressure (institutional isomorphism). Without reform, rankings may continue incentivizing behaviors that distort scholarly contribution and compromise research integrity.
Competing Interest Statement: The author declares that he is affiliated with a university that is a peer institution to one of the universities included in the study group.
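The growth figures above are plain before/after arithmetic; a tiny sketch with invented publication counts, just to show the formula behind a "965%" jump:

```python
# Percentage growth in publication output between two periods.
# The counts below are invented for illustration; only the formula matters.

def growth_pct(before: int, after: int) -> float:
    return (after - before) / before * 100

print(growth_pct(400, 4260))  # 965.0 -> the kind of jump flagged in the study
```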

New study: #ChatGPT is not very good at predicting the #reproducibility of a research article from its methods section.
link.springer.com/article/10.1

PS: Five years ago, I asked this question on Twitter/X: "If a successful replication boosts the credibility of a research article, then does a prediction of a successful replication, from an honest prediction market, do the same, even to a small degree?"
x.com/petersuber/status/125952

What if #LLMs eventually make these predictions better than prediction markets? Will research #assessment committees (notoriously inclined to resort to simplistic #metrics) start to rely on LLM replication or reproducibility predictions?

SpringerLink · ChatGPT struggles to recognize reproducible science - Knowledge and Information Systems
The quality of answers provided by ChatGPT matters with over 100 million users and approximately 1 billion monthly website visits. Large language models have the potential to drive scientific breakthroughs by processing vast amounts of information in seconds and learning from data at a scale and speed unattainable by humans, but recognizing reproducibility, a core aspect of high-quality science, remains a challenge. Our study investigates the effectiveness of ChatGPT (GPT-3.5) in evaluating scientific reproducibility, a critical and underexplored topic, by analyzing the methods sections of 158 research articles. In our methodology, we asked ChatGPT, through a structured prompt, to predict the reproducibility of a scientific article based on the extracted text from its methods section. The findings of our study reveal significant limitations: Out of the assessed articles, only 18 (11.4%) were accurately classified, while 29 (18.4%) were misclassified, and 111 (70.3%) faced challenges in interpreting key methodological details that influence reproducibility. Future advancements should ensure consistent answers for similar or same prompts, improve reasoning for analyzing technical, jargon-heavy text, and enhance transparency in decision-making. Additionally, we suggest the development of a dedicated benchmark to systematically evaluate how well AI models can assess the reproducibility of scientific articles. This study highlights the continued need for human expertise and the risks of uncritical reliance on AI.
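The abstract doesn't reproduce the study's structured prompt. Purely as an illustration of the approach described (predicting reproducibility from a methods section), here is a minimal sketch using the OpenAI Python client; the prompt wording, one-word answer format, and model name are assumptions, not the authors' setup:

```python
# Rough sketch of asking a chat model to predict reproducibility from a
# methods section. The prompt wording and model are assumptions, not the
# study's actual setup. Requires: pip install openai, OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

def predict_reproducibility(methods_text: str) -> str:
    prompt = (
        "Based only on the methods section below, answer with exactly one "
        "word -- 'reproducible', 'irreproducible', or 'unclear' -- and "
        "nothing else.\n\n" + methods_text
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the paper evaluated GPT-3.5
        messages=[{"role": "user", "content": prompt}],
        temperature=0,          # reduce run-to-run variation
    )
    return resp.choices[0].message.content.strip().lower()
```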

Excellent: "More than 100 institutions and funders worldwide have confirmed that research published in #eLife continues to be considered in hiring, promotion, and funding decisions, following the journal’s bold move to forgo its Journal Impact Factor."
elifesciences.org/for-the-pres

PS: This is not just a step to support eLife, but a step to break the stranglehold of bad metrics in research assessment. For the same reason, it's a step toward more honest and less simplistic assessment.

#Academia #Assessment #JIF #Metrics #Universities
@academicchatter

eLife · More than 100 institutions and funders confirm recognition of eLife papers, signalling support for open science
Conversations with research organisations offer reassurance to researchers and highlight growing momentum behind fairer, more transparent models of scientific publishing and assessment.

"Nobody owes anybody a job. The only reason anyone has one is because there was a problem at some point in that business that required a human to do some part of the work. Building on that, if that ever becomes not the case, for a particular person or team or department of human employees, the natural next action is to get rid of them."

Daniel Miessler: danielmiessler.com/blog/real-p

danielmiessler.com · The End of Work
My big, depressing, and optimistic theory for why it's so hard to find and keep a job that makes you happy

John McPhee’s Short Essay About A 1972 Rockefeller Civil Service Award Winner - Dr. Luna Leopold, The First Chief Hydrologist At The USGS
--
jfklibrary.org/archives/other- <-- JFK's 1958 speech at the Rockefeller Civil Service Awards
--
usgs.gov/news/featured-story/l <-- shared details of Dr. Luna Leopold from the USGS
--
fiddlrts.blogspot.com/2020/04/ <-- shared details of The Patch, by John McPhee
--
eos.org/opinions/luna-b-leopol <-- shared EOS retrospective of Dr Leopold
--
I was reading - with a mug of tea - The Patch (link above) by John McPhee, a decades-old favourite author of mine - and came across this short essay amongst many fine others…
#spatial #datalover #measurements #metrics #hydrology #water #fedservice #fedscience #LunaLeopold #hydrologist #crossdiscipline #RockefellerCivilServiceAward #USGS #JohnMcPhee #writing #readingforpleasure #mission #opendata #mapping #waterresources #watersecurity #wateruse #watermanagement
@USGS

Stanley Cup playoffs are just around the corner, starting this weekend.

I'll post some #data #visualization #dataviz for each playoff pairing before the playoffs start. We'll start with the eastern conference. In the Atlantic, we first have the battle of #Ontario: #Toronto #MapleLeafs vs #Ottawa #Senators. These are rolling 5-day #metrics for each team, allowing for easy comparison. #hnom #stanleycup #nhl #hockey

Follow the link for a fully interactive plot:
sports.dionresearch.com/nhl/TO
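For anyone curious how rolling 5-day metrics work (this is not the actual pipeline behind those plots), a minimal pandas sketch with invented game data:

```python
# Minimal sketch of rolling 5-day team metrics with pandas. Column names
# and data are invented; not the actual pipeline behind the linked plots.
import pandas as pd

games = pd.DataFrame({
    "date":          pd.to_datetime(["2025-04-01", "2025-04-03", "2025-04-05",
                                     "2025-04-08", "2025-04-10", "2025-04-12"]),
    "goals_for":     [3, 1, 4, 2, 5, 2],
    "goals_against": [2, 3, 1, 2, 2, 4],
}).set_index("date")

# Time-based window: average over the trailing 5 calendar days.
rolling = games.rolling("5D").mean()
print(rolling)
```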

Glacial Lake Mapping Using Remote Sensing Geo-Foundation Model
--
doi.org/10.1016/j.jag.2025.104 <-- shared paper
--
"HIGHLIGHTS:
• Proposed U-ViT model based on Prithvi GFM for multi-sensor glacial lake mapping.
• Achieved an F1 score of 0.894 on Sentinel-1&2, surpassing CNNs scoring below 0.8.
• Maintains strong performance with 50% less training data, proving efficiency.
• Excels in detecting small lakes (<0.01km²) and handling clouds and complex terrains..."
#GIS #spatial #mapping #glaciallake #GeospatialFoundationModel #satellite #Sentinel #GaoFen #remotesensing #earthobservation #model #modeling #climatechange #glacial #glacier #melt #melting #UViT #deeplearning #AI #framework #performance #metrics #opensource
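The F1 comparison in the highlights is the standard pixel-wise F1 for binary masks; a small self-contained sketch with invented masks:

```python
# Pixel-wise F1 score for binary glacial-lake masks, as used to compare
# segmentation models. The tiny masks below are invented for illustration.
import numpy as np

def f1_score(pred: np.ndarray, truth: np.ndarray) -> float:
    tp = np.sum((pred == 1) & (truth == 1))  # true positives
    fp = np.sum((pred == 1) & (truth == 0))  # false positives
    fn = np.sum((pred == 0) & (truth == 1))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(round(f1_score(pred, truth), 3))  # 0.667
```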

The opening sentence of this: "one of the most popular academic social networking sites is [ResearchGate]"

🤔 This may be factually correct but it really doesn't feel like it anymore. I always felt that RG was, frankly, $hit, but accepted that it appeared popular among scholars. But today? I rarely come across it.

The Now-Defunct #ResearchGate Score and the Extant Research Interest Score: A Continued Debate on #Metrics of a Highly Popular #Academic #SocialNetworking Site doi.org/10.1515/opis-2024-0011

De Gruyter · The Now-Defunct ResearchGate Score and the Extant Research Interest Score: A Continued Debate on Metrics of a Highly Popular Academic Social Networking Site
Academics might employ science social media or academic social networking sites (ASNSs), such as ResearchGate (RG), to showcase and promote their academic work, research, or published papers. In turn, RG provides usage statistics and performance metrics such as the now-defunct RG Score and the Research Interest Score (RIS) that offer a form of recognition about a researcher’s popularity, or how research is being used or appreciated. As part of a larger appreciation of how ASNSs contribute to knowledge sharing, in this article, the RG Score is reappraised, reflecting on why this metric may have been abandoned while reflecting on whether RIS is any better as an author-based altmetric. Similar to the RG Score, RG does not transparently indicate the precise equation used to calculate RIS, nor is any rationale provided for the weighting of its four factors (other reads, full-text reads, recommendations, and citations), which carry a relative weighting of 0.05, 0.15, 0.25, and 0.5, respectively. Ultimately, the responsible use of RG’s altmetrics lies in users’ hands, although caution is advised regarding their use to formally characterize or rank academics or research institutes.
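Taking the four factors and weights named in the abstract at face value, here is a sketch of what an RIS-style score would look like; since RG does not disclose the actual equation, treating it as a plain weighted sum is an assumption:

```python
# Research Interest Score as a plain weighted sum of the four factors and
# weights named in the abstract. RG's actual equation is undisclosed, so
# a linear combination is an assumption, not the real formula.
RIS_WEIGHTS = {
    "other_reads":     0.05,
    "full_text_reads": 0.15,
    "recommendations": 0.25,
    "citations":       0.50,
}

def research_interest_score(counts: dict) -> float:
    return sum(RIS_WEIGHTS[k] * counts.get(k, 0) for k in RIS_WEIGHTS)

print(research_interest_score(
    {"other_reads": 200, "full_text_reads": 80, "recommendations": 4, "citations": 12}
))  # 200*0.05 + 80*0.15 + 4*0.25 + 12*0.5 = 29.0
```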

OKRs identify the targets we're aiming for when outcomes are too uncertain to forecast precisely, but we still need to give ourselves a measure of progress or success to aim at.

The questions I ask when creating Key Results are: "What empirically measurable impact would be truly incredible to achieve, if everything goes right?" and "How will we know -- objectively -- that we're making progress toward our objective?"
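To make the "objectively measurable" part concrete, a minimal sketch of a Key Result with a progress metric; the names and numbers are invented:

```python
# Minimal sketch of a measurable Key Result with an objective progress
# metric. Names and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class KeyResult:
    description: str
    baseline: float   # where we started
    target: float     # the "truly incredible if everything goes right" value
    current: float    # latest measurement

    def progress(self) -> float:
        """Fraction of the way from baseline to target, clamped to [0, 1]."""
        span = self.target - self.baseline
        return max(0.0, min(1.0, (self.current - self.baseline) / span))

kr = KeyResult("Weekly active users", baseline=1_000, target=5_000, current=2_600)
print(f"{kr.progress():.0%}")  # 40%
```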

Read more 👉 lttr.ai/AbnS0