#pipewire


#LinuxAudio for musicians be like:

Aaah I found this cool sound in my #Waldorf #Rocket #Synthesizer, it uses the #Arpeggiator.

Now I want to try a beat with it. Oh there is #Hydrogen #DrumSampler on my #Linux system!

Oh *beep*, most drum kits don't work and I don't know why, but I don't want to figure that out now.

Managed to patch Hydrogen MIDI to my outboard synthesizer. Now how to send MIDI Clock? Oh Hydrogen cannot send MIDI clock.

I need jack_midi_clock to get MIDI Clock from the general Jack Transport. Installing.

Okay, now I'm in GMIDImonitor, finding out that jack_midi_clock does not send any MIDI Clock, at least not on my system of #Pipewire + #Jack. Or perhaps it does but I just can't see it, since the synth's arpeggiator stops when the transport plays and continues playing when the transport stops.

This is getting weird.
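
If I want to double-check whether any MIDI Clock messages are arriving at all, outside of GMIDImonitor, a rough sketch like this could just count them. (Assumptions: mido and python-rtmidi are installed, and the clock source's port is visible on the ALSA sequencer side, which PipeWire normally bridges.)

```python
# Count incoming MIDI Clock messages for a few seconds (a sanity check, not a tool).
import time
import mido

print("Available inputs:", mido.get_input_names())
port_name = mido.get_input_names()[0]   # pick the clock source's port by hand

clocks = 0
start = time.time()
with mido.open_input(port_name) as port:
    while time.time() - start < 5:       # listen for 5 seconds
        for msg in port.iter_pending():
            if msg.type == 'clock':       # MIDI Clock (0xF8)
                clocks += 1
        time.sleep(0.01)

# A working clock source at 120 BPM should give roughly 48 per second (24 per quarter note).
print(f"Received {clocks} clock messages in 5 seconds")
```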

Now I have lost my idea for the drum pattern, so I'm giving up on making music for today. Time is very limited anyways.

Domifare at Folklore

Even Rascob (aka BITPRINT) has been organising regular Live Code gigs at Folklore in Hackney. I played Domifare last night... sort of.

I’ve blogged before about pitch recognition being flaky. And it is, but usually within the first three minutes or so, the SuperCollider autocorrelation UGen does actually recognise the pitches and the piece runs.

Not last night. Instead, I spent 15 minutes playing the same four-note phrase over and over and over again, in front of an audience.

What went wrong

  • Normally, when I play this, I have the mic right down in the bell, and it was up slightly higher this time, which may have caused problems.
  • When I practice this, I lip the pitch up or down slightly and this often works. This level of subtlety and control is extremely difficult after several minutes of failure on stage. Instead, my playing got messier and messier over the course of the set.
  • As I was getting ready, I couldn’t decide whether to use my old mouthpiece or my new one, which is slightly harder to control but gives greater freedom. It didn’t seem to make a difference when I was practising, so I went for the newer, freer one, which might have been a mistake.
  • My sound card’s output was also extremely low, which is a problem I’ve had before with PipeWire. This was concerning during the tech setup, but turned out not to be an issue during the performance.
  • My laptop was sat on a stool in front of me, which was not a distance that worked at all with my glasses. The screen was so blurry I couldn’t properly tell what notes were arriving.

How to fix it

  • If I need consistent mic placement that’s down in the bell, I should make a mount that goes into the bell. This would be a cork-covered ring with spokes and a mic suspended in the middle.
  • Flucoma would allow me to train a neural net to recognise a series of pitches as a cue. Because the tuba spectrum is weird and the mic is most sensitive at the weird points, I would probably have to do the training on stage. Would this be more tedious than 15 minutes of failed command input? No.
  • Practising this piece is essentially training myself to be decipherable to the algorithm, which is subtly different from normal practice goals or technique. I did not get as much practice as I would have liked. I spent a lot of time building lip strength, with the idea that it would make my notes clearer, but not as much time getting feedback from the autocorrelation algorithm. It may be that more practice with the program would have helped. Or, if the algorithm was confused by background noise or mic placement, perhaps it would have made no difference whatsoever.
  • Taking the bus with a tuba, a laptop, an audio card, cables, a mic, a mic stand and so forth is already a bit much, but it may be the case that I also need a laptop stand, so I can make sure my computer is at a height and location where I can see it. Or maybe my old reading glasses need more and more distance these days. Maybe a laptop on a stool is not a good use for them.

How I dealt with everything

I think my stage presence was fine, actually, except for when I was giving up at the end. I should have launched a few minutes of solo improv starting from and around the cue phrase. I’m going to practice this a bit, not that I expect the piece to fail like this again.

This was not my first performance of this piece. It went fine when I played it in Austria, 3 years ago.

Well, at least the failure of that piece wasn’t all that went wrong

Shelly Knotts and I were also meant to play some MOO, but discovered during the sound check that most of it wasn’t working, so we cut it from the programme.

Audience Reactions

People were generally positive. Multiple people used the word “futility” but with a positive intention. Which goes to show you can’t trust nerds.

To do

  • Incorporate Flucoma
  • Play this on Serpent because it’s more portable and I really do have more freedom of pitch.
Video by Shelly Knotts
#Domifare #gig #music

I can't use #Wayland with my #Linux system. Performance is terrible.

I have a 2021 Lenovo P17Gen1.
It has both an Intel P630 and NVIDIA RTX3000 Mobile GPU, running as a Prime pair.
It has a 4K eDP display, and two LG 4K displays, one connected via USB-C, and one via DP, via a TB3 Dock, all running at 60Hz.
It has 128GB of RAM, and 4TB of striped BTRFS SSD.
I am running the latest #Hyprland.

I tried running on #Arch, which cut the frame rate by more than half ANY time I connected an external 4K display ANYWHERE, on either the laptop or the dock, via DP or USB-C. It refused to work via HDMI.

Switched to #Ubuntu and used Koolit's #Ubuntu installer for Hyprland. Performance is close to 60fps, but not quite. It stutters, and OBS runs at an average of 10fps, regardless of whether #PipeWire is in use or not. Unusable.

I have been a #Linux user since September of 1991, and I can't even begin to know where the F**K to go to even diagnose this problem, due to the sheer number of variables.

Is it the NVIDIA drivers?
Is it Wayland?
Is it Hyprland?
Is it Pipewire?
Why is the performance better on #Ubuntu than on #Arch?
If I were to use Wayland, what COMPARABLE GPU would I use instead?
Do I just completely jettison using a laptop and build a workstation instead?
Why in the flying F**K can I not get stable vsync?!

I am posting this because I am genuinely looking for knowledgeable answers from knowledgeable people, and I am _VERY_ concerned that, given the mass exodus from Xorg to Wayland, I need to figure something out before I end up with a system configuration that is unusable.

-Thom

Replied in thread

@alisynthesis It's an RME RayDAT. When I select the ALSA interface, Reaper shows errors about being unable to open the input (presumably because #Pipewire has already grabbed it). I can access it under ALSA by manually typing 'pipewire' in the driver selection in #Reaper, but then I only get 2 i/o (out of 32) showing in qpwgraph, and the same gronky aliased noise with buffers <1024.

I’ve published Part 3 of “I Want to Love Linux. It Doesn’t Love Me Back.”

This one’s about the so-called universal interface: the console. The raw, non-GUI, text-mode TTY. The place where sighted Linux users fall back when the desktop breaks, and where blind users are supposed to do the same. Except — we can’t. Not reliably. Not safely. Not without building the entire stack ourselves.

This post covers Speakup, BRLTTY, Fenrir, and the audio subsystem hell that makes screen reading in the console a game of chance. It dives into why session-locked audio breaks espeakup, why BRLTTY fails silently and eats USB ports, why the console can be a full environment — and why it’s still unusable out of the box. And yes, it calls out the fact that if you’re deafblind, and BRLTTY doesn’t start, you’re just locked out of the machine entirely. No speech. No visuals. Just a dead black box.

There are workarounds. Scripts. Hacks. Weird client.conf magic that you run once as root, once as a user, and pray to PipeWire that it sticks. Some of this I learned from a reader of post 1. None of it is documented. None of it is standard. And none of it should be required.

This is a long one. Technical, and very real. Because the console should be the one place Linux accessibility never breaks. And it’s the one place that’s been left to rot.

Link to the post: fireborn.mataroa.blog/blog/i-w

fireborn.mataroa.blog: I Want to Love Linux. It Doesn't Love Me Back: Post 3 – Speakup, BRLTTY, and the Forgotten Infrastructure of Console Access

#Linux / #pipewire annoyance; maybe you've seen this and know how to fix it?
Using the GNOME volume slider, I mute the volume of my motherboard's built-in audio device.
When I then unplug the DisplayLink hub that is plugged into my computer, pipewire (or #GNOME, or ???) unmutes the built-in audio device. I don't want that!
Until recently it stayed muted as intended. Maybe it broke in pipewire 1.4.2, released to #Debian testing in mid-April?
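
Until someone has the real answer, a blunt workaround could be a little watcher that re-mutes the device whenever something unmutes it. A sketch only, assuming the pulsectl Python bindings (which talk to pipewire-pulse) and a made-up sink name:

```python
# Re-mute a specific sink whenever it gets unmuted (polling workaround, not a fix).
import time
import pulsectl

# Placeholder sink name: find the real one with `pactl list sinks short` or `wpctl status`.
TARGET = "alsa_output.pci-0000_00_1f.3.analog-stereo"

with pulsectl.Pulse('keep-builtin-muted') as pulse:
    while True:
        for sink in pulse.sink_list():
            if sink.name == TARGET and not sink.mute:
                pulse.mute(sink, True)
        time.sleep(1)
```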

Continued thread

#Phosh's #MobileSettings app now allows you to select and disable the overview wallpaper, so there's no need to reach for the command line when you want the classic black back.

The app's feedback panel allows you to adjust role-based volumes in case #PipeWire's role-based routing is in effect. This is picked up automatically when you enable role-based routing.

3/x

is it too late to join the #pipewire party? i hope not, or else my #flx1 will be sad

all applications that use pipewire can finally utilize the camera (adding to our list of v4l, qtmultimedia and android apps)

one issue we are facing is that aperture is not very happy with the back cameras, so applications like #GNOME snapshot or authenticator will have the preview flipped. captured frames are surprisingly not flipped tho:
gitlab.gnome.org/GNOME/snapsho

I'm not sure how many people on here run archlinux, but recently, a contribution I made to speech dispatcher, namely the pipewire module I talked about a while ago on here, got in the arch package.

Before whooping with joy and other similar feelings because something I made, with the help of the maintainer of course, got into a mainline distribution people actually use, not just onto my computer, it'd be really nice if I could know whether it made any tangible difference.

So then, here are a few of the things I hoped would come true with the introduction of a pipewire native audio module:

* less battery usage by speech dispatcher
* the ability to speak normally, even in high-stress environments, such as low memory and so on
* lower cpu consumption
* lower latency when speaking and being interrupted often, the case of screenreaders fits perfectly here

Now, I'm absolutely no statistician, and I don't know how to even begin to measure any of this. So, here is me asking help from the wider fediverse.

For the people who know how any of this works, can you take a few measurements of the performance of speechd, both with and without the pipewire support enabled? I'm primarily interested in the situations I mentioned before, especially the latency, how well it does under load, and whether it consumes fewer resources.
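
If it helps to make the latency part concrete, this is roughly the kind of probe I have in mind. It's only a sketch: it assumes the python3-speechd bindings are installed, and it times the speak/cancel round trip over SSIP rather than the actual audio onset, so treat the numbers as relative rather than absolute.

```python
# Time repeated speak()+cancel() round trips against speech-dispatcher.
# Run once with the pipewire module enabled and once without, then compare.
import time
import speechd

client = speechd.SSIPClient('latency-probe')
samples = []
for _ in range(50):
    t0 = time.monotonic()
    client.speak('testing one two three')
    client.cancel()                      # interrupt right away, like a screen reader would
    samples.append(time.monotonic() - t0)
client.close()

samples.sort()
print(f"median {samples[len(samples) // 2] * 1000:.1f} ms, worst {samples[-1] * 1000:.1f} ms")
```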

For anyone who sees this, can I please have a boost? :p It would be incredibly helpful for me to know if this improved anything, even a little, and whether it was worth going on and on about latency improvements regarding pipewire. I tried to do local tests with sound first; the results were positive, and so was my experience with using speech dispatcher with this module enabled. It could be because of my older computer, but I definitely feel a difference, although not a huge one. If there are other bottlenecks, I'm not sure they're in audio anymore, unless my code is flawed, which could definitely be the case; a deeper review by someone who knows audio stuff better than me should probably be done at some point. But I'm trying to get an inkling of the benefits this had, if any.
#linux #pipewire #archlinux #statistics

So here's the thing: I'd really love to use #Ardour for my next album, cause I don't want to continue working with something from a company sucking up to America's regime

Only problem is: I can't

I'm using 3 different audio interfaces via #pipewire 1.2.4, and Ardour keeps on forgetting USB audio and MIDI connections over restarts (or suspends), forcing me to make each connection again, which sucks.

First person to solve this for me gets a present and a special credit in the liner notes.
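
(In the meantime, one blunt stopgap could be re-creating the connections after launch with pw-link, which ships with PipeWire. A sketch only: the port names below are placeholders, so list the real ones with pw-link -o and pw-link -i first.)

```python
# Re-create a fixed set of PipeWire links after starting Ardour.
import subprocess

# (output port, input port) pairs. These names are made up; replace with your own.
LINKS = [
    ("my_usb_interface:capture_1", "ardour:Audio 1/audio_in 1"),
    ("ardour:Master/audio_out 1", "my_usb_interface:playback_1"),
]

for out_port, in_port in LINKS:
    # pw-link exits non-zero if the link already exists; that's fine for this purpose.
    subprocess.run(["pw-link", out_port, in_port], check=False)
```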
(1/2)