I got the usual discourse from #wikimedia about my proposal to replace #wikidata with my work on #nomunofu and #datae. Basically: that is too much work.

Also, #scheme is #scary.

That is another case where R&D in software engineering is deferred to some mythical organization that will do all the "hard" work so that others can reap all the benefits... But at the same time, "building is just the start"...

I am very sad to read that, especially from one of the biggest organizations of all time.

The reply from #wikimedia is very confusing, but since I got similar feedback from other established companies, I guess that is the min/max level of confused blurb that is acceptable in #2020.

R&D would put wikidata at risk?! When #wikidata already struggles with 10bn triples, and they actually want more.

Also, another case of "let's throw #bigdata #hadoop at the problem" because we actually do not know...

I am well aware of legacy and debt. I have a profound respect for legacy and debt. But that is not a reason not to move forward.

As such, it must be clear to anyone with some reading comprehension skills that the #oracle #java #loom project is reaping the ideas of #scheme: continuations, tail calls, generators. Scopes are new, but nothing you could not already do with Concurrent ML.

That is, #scheme was already the future; the Loom project is another proof of it.
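
To make that concrete, here is a minimal sketch of a generator built from nothing but call/cc, in plain Scheme; make-generator and next are illustrative names of my own, not anything from Loom or from my projects:

    ;; A generator built from first-class continuations: each call to
    ;; the returned thunk resumes the traversal and yields the next item.
    (define (make-generator lst)
      (define return #f)                 ; where to deliver the next item
      (define resume                     ; where to pick the traversal up
        (lambda (ignored)
          (for-each
           (lambda (x)
             (call/cc
              (lambda (k)
                (set! resume k)          ; remember this yield point
                (return x))))            ; hand x back to the caller
           lst)
          (return 'done)))
      (lambda ()
        (call/cc
         (lambda (k)
           (set! return k)
           (resume #f)))))

    (define next (make-generator '(a b c)))
    (next)  ; => a
    (next)  ; => b
    (next)  ; => c
    (next)  ; => done

That yield/resume dance is exactly the kind of control flow Loom's continuations bring to the JVM.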

#oracle has spoken. My project is doomed to fail. That is why I have more motivation than ever before to make it work even better.

#wikidata I will send you a tuple from space ;-)

@zig I think it's hard for a big organization to trust a proposal of this kind because the risks of failure are high. AFAICT, the best way to show that the project is viable is to build a proof of concept: load a wikidata dump in your stack and host it for a few weeks, leaving time for people to try it out. Bonus points if you can stay in sync with the live data.
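
Concretely, such a proof of concept boils down to streaming the N-Triples dump into the store, one line per triple. A minimal sketch in Guile Scheme, assuming an already-decompressed dump; parse-ntriple and store-add! are hypothetical hooks, not the actual nomunofu API:

    ;; Stream an N-Triples dump into a store, one line per triple.
    (use-modules (ice-9 rdelim))         ; read-line

    (define (parse-ntriple line)
      ;; naive split on spaces: <s> <p> <o> .
      ;; a real loader needs a proper N-Triples parser,
      ;; since literals may contain spaces.
      (string-split line #\space))

    (define (load-dump! port store-add!)
      (let loop ((line (read-line port))
                 (count 0))
        (if (eof-object? line)
            count
            (begin
              (store-add! (parse-ntriple line))
              (loop (read-line port) (+ count 1))))))

    ;; e.g., with pk (Guile's debug print) standing in for a real store:
    ;; (call-with-input-file "latest-truthy.nt"
    ;;   (lambda (port) (load-dump! port pk)))

Staying in sync, as suggested above, would then mean consuming the edit stream on top of the bulk load.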

@zig You will only get a fraction of the queries that the official endpoint is exposed to, so you might not need servers that are that huge. It still costs money, sure, but I don't see how anyone is going to give you a big grant without that sort of guarantee.

@pintoch My deep thought is that #wikimedia is entrenched in #java tech, and they are not ready to acknowledge the technical merits of other programming languages, or even of other approaches than the one they are used to.

They followed the micro-service meme, which has its merits but is clearly not an approach that is workable at a small scale. And a usable wikidata at a small scale is a requirement for knowledge equity.

Again, I repeat it here: my approach works in both small and big settings.

@zig Great! Is your endpoint available anywhere? Is it kept in sync with Wikidata?

I would not say Wikimedia is entrenched in Java in general ^^ PHP is much more common there.
I don't find it outrageous that they stick to established languages and projects.

@pintoch

It is not available, and it is not kept in sync.

re Java and PHP: at least in French, the word "establishment" carries a negative connotation: promoting tradition in spite of progress.

As I tried to explain in the thread, Java et al. are trying to copy #scheme, with the great help of money and... establishment.

The technical merits of Java, #PHP, or Scheme are not mere #dev rumors.
