
When I suggest that people will game whatever metrics we put in place, I'm often met with shocked indignation. We would never game the numbers! And yet we do.

I took my car in for service this morning and I was asked if it was ok that they split the bill across two transactions. "You're being measured on number of cars through?" I asked. The answer was obviously yes, and this way I counted as two cars.

It's not just that the numbers are now wrong; we have also introduced waste into the system. There were two credit card transactions rather than one. Two receipts instead of one. There was additional time spent by the workers explaining why they wanted to do it this way. All of this was pure waste, but because they felt they were being judged on the count of cars through, it felt justified to them.

If people think they'll be judged on measurements, they'll game them. The more judgement, the more inaccurate the numbers will be, and the more waste will be introduced into the overall system.

You might think, then, that I'm opposed to measuring anything, but that's not true at all. I'm a big proponent of measuring the things we want to improve. I'm just a realist and recognize that we have to design our measurements very carefully. If we measure the wrong things, or measure in the wrong way, we'll drive the wrong behaviours, and that's our problem to solve.

A simple metric from #FDroid #metrics data: app downloads per week. Start with the data from one of the two servers behind f-droid.org (http02) and add up the hits for paths ending in ".apk". That gave about 2 million. Multiply by 18 (fronters + mirrors) to get ~36 million app downloads a week.

import requests

# Count hits on paths ending in ".apk" in one server's metrics data.
# NB: the URL below is truncated in the original post; the full
# per-server JSON lives under https://fdroid.gitlab.io/metrics/
hits = 0
r = requests.get('https://fdroid.gitlab.io/metrics/http0')
data = r.json()
for path in data['paths']:
    if path.endswith('.apk'):
        hits += data['paths'][path]['hits']
print('APKs', hits)
print('~weekly downloads:', hits * 18)  # scale up for all 18 fronters + mirrors
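
If all 18 hosts published the same JSON layout, the x18 extrapolation could be replaced by an actual sum. A minimal sketch, assuming hypothetical per-server metrics paths modeled on the (truncated) URL above; the server names here are illustrative, not the real F-Droid fronter/mirror roster:

import requests

# Illustrative server names only; the post gives the roster size (18),
# not the actual hostnames.
SERVERS = ['http01', 'http02']

total = 0
for server in SERVERS:
    # Assumed URL layout, modeled on the truncated URL in the post.
    url = f'https://fdroid.gitlab.io/metrics/{server}'
    data = requests.get(url).json()
    total += sum(
        info['hits']
        for path, info in data['paths'].items()
        if path.endswith('.apk')
    )
print('APK hits across sampled servers:', total)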

forum.f-droid.org/t/experiment

A Comprehensive Framework For Evaluating The Quality Of Street View Imagery
--
doi.org/10.1016/j.jag.2022.103 <-- shared paper
--
“HIGHLIGHTS
• [They] propose the first comprehensive quality framework for street view imagery.
• Framework comprises 48 quality elements and may be applied to other image datasets.
• [They] implement partial evaluation for data in 9 cities, exposing varying quality.
• The implementation is released open-source and can be applied to other locations.
• [They] provide an overdue definition of street view imagery..."
#GIS #spatial #mapping #streetlevelimagery #Crowdsourcing #QualityAssessmentFramework #Heterogeneity #imagery #dataquality #metrics #QA #urban #cities #remotesensing #spatialanalysis #StreetView #Google #Mapillary #KartaView #commercial #crowdsourced #opendata #consistency #standards #specifications #metadata #accuracy #precision #spatiotemporal #terrestrial #assessment

What's going on when some #universities jump more than 950% in a few years on #metrics used in university #rankings? Are they gaming the metrics?
biorxiv.org/content/10.1101/20

"Key findings include publication growth of up to 965%, concentrated in STEM fields; surges in hyper-prolific authors and highly cited articles; and dense internal co-authorship and citation clusters. The group [of studied institutions] also exhibited elevated shares of publications in delisted journals and high retraction rates. These patterns illustrate vulnerabilities in global ranking systems, as metrics lose meaning when treated as targets (Goodhart’s Law) and institutions emulate high-performing peers under competitive pressure (institutional isomorphism). Without reform, rankings may continue incentivizing behaviors that distort scholarly contribution and compromise research integrity."

#Academia
@academicchatter

bioRxiv · Gaming the Metrics? Bibliometric Anomalies and the Integrity Crisis in Global University Rankings

From the abstract: "Global university rankings have transformed how certain institutions define success, often elevating metrics over meaning. This study examines universities with rapid research growth that suggest metric-driven behaviors. Among the 1,000 most publishing institutions, 98 showed extreme output increases between 2018-2019 and 2023-2024. Of these, 18 were selected for exhibiting sharp declines in first and corresponding authorship. Compared to national, regional, and international norms, these universities (in India, Lebanon, Saudi Arabia, and the United Arab Emirates) display patterns consistent with strategic metric optimization. [Key findings as quoted above.]"

Competing Interest Statement: "The author declares that he is affiliated with a university that is a peer institution to one of the universities included in the study group."