
Pro Audio Pop!

ayoungethan

Member
Jun 19, 2019
21
0
6
35
Hi folks,

I just bought a Darter Pro, which I use primarily for writing and pro audio work. I figure Pop!_Planet might be a great place to share what I've learned about optimizing Pop!_OS for pro audio (low latency) work, and perhaps to contribute some viable suggestions to the OS roadmap to improve the "out of the box" experience for others.

I have been with Ubuntu Linux and various related derivatives since 2009, when my Dell XPS m1330 suffered from Windows Vista cannibalizing itself. I was already running OpenOffice, and I decided to take a deep dive into FLOSS at the OS level. I have been in over my head ever since, and remain very excited about the collaborative and performance potential that open source software and its community structures provide.

Please don't hesitate to reach out to me if you are interested in collaborating on improving the low latency desktop experience for Pop!_OS users.
 

Thanks! Would love to know what you have found...I have most of my stuff organized and will start posting it shortly.

UPDATE: still waiting on ability to edit the wiki
 
NOTE: My original intention was to post this to the Pop!_Planet wiki, but I have been unable to edit the wiki due to an unresolved technical issue: https://pop-planet.info/forums/threads/unable-to-edit-wiki.251/ It is posted in 2 parts due to character restrictions.

My current thought is to divide this article into four major sections:
1. Concepts: basic ideas behind low latency audio performance optimization
2. Setup process: concrete steps users can take to optimize their systems
3. System76 recommendations: specific recommendations, based on aggregate user experience, research and market analysis, for System76 to implement to enhance low latency performance or make the setup process more accessible and streamlined
4. Additional resources (further reading)

The article currently lacks #2, the setup process. I have this written for my system (a System76 Darter Pro) and will post it shortly.

==Concepts==
This guide will cover how to make the most essential optimizations in Pop!_OS for low latency professional audio performance on a modern system, while still leaving the system in good shape to perform admirably with regards to battery life and throughput for more standard desktop tasks.

Many, if not most, of these concepts also apply to other operating systems. The performance optimizations will also apply to most GNU/Linux-based operating systems.

If you don't care about understanding the concepts underlying the performance optimization of a digital audio workstation (DAW), feel free to skip to the section with the specific performance tuning steps.

This tutorial was written by a lay person for lay people. There are gross generalizations that some experts may dispute. It is my hope that those generalizations are still practical and relevant to the task of having a basic understanding of latency and tuning a system for better latency.

===Computer performance: An introduction===

When we talk about desktop computer performance, we generally discuss two things:
1. Throughput: how much data or how many computations a computer can handle over a given timeframe
2. Energy efficiency: how much energy the computer uses to do its work

We typically see throughput vs energy efficiency as a tradeoff, and this is true for systems under constant heavy load (such as servers). However, for systems under highly-dynamic loads (heavy one moment, light the next), the "race to sleep" philosophy of optimizing power saving and performance comes into play, whereby a computer's ability to maximize its performance for discrete (dynamic) workloads actually saves power by allowing it to maximize the time spent in low power states (see https://en.wikichip.org/wiki/race-to-sleep).

However, in professional audio systems (often called Digital Audio Workstations, or DAWs), audio latency becomes an issue. We can define latency as the amount of time it takes for a signal or data to travel through a path, usually measured in milliseconds. The word "jitter" describes variations in latency over time. Professional audio systems not only need low latency (generally defined as sub-10ms latency paths), but also precise latency with negligible jitter. For instance, consider two computers with "avg 10ms latency," with 5 measurements taken over the course of a second (1000ms). Computer 1 might measure 9ms, 9ms, 10ms, 11ms, 11ms, which averages to 10ms, but with 2ms (+/- 1ms) of jitter. Computer 2 might measure latencies at 5ms, 5ms, 10ms, 10ms, 20ms, which also averages 10ms, but with 15ms of jitter. This jitter makes the signal highly unpredictable, which means it becomes very difficult or impossible to precisely synchronize, route and process various audio streams together, and can cause buffer overruns (xruns) and data loss.
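The average/jitter arithmetic in the example above can be reproduced with a short shell pipeline; the sample values below are the hypothetical measurements for Computer 2, not real benchmarks:

```shell
# Average latency and peak-to-peak jitter from a list of latency
# samples in milliseconds (Computer 2 from the example above).
samples="5 5 10 10 20"

echo "$samples" | awk '{
    min = $1; max = $1; sum = 0
    for (i = 1; i <= NF; i++) {
        sum += $i
        if ($i < min) min = $i
        if ($i > max) max = $i
    }
    printf "avg=%.0fms jitter=%.0fms\n", sum / NF, max - min
}'
# prints: avg=10ms jitter=15ms
```

Here jitter is measured peak-to-peak (max minus min), matching the 15ms figure in the example.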

Stable latency is much more important for recording than low (but unstable) latency. It doesn't matter if the data takes a little longer to get recorded, as long as it all gets recorded. Low latency is much more important for live response. Likewise, a high (but constant, low jitter) latency is not always a bad thing, for example, when playing back audio or mixing audio together, because it can all remain in sync. However, low latency is necessary for live, real-time control of digital instruments, multitracking via overdubs, and for monitoring live mixes in real time without perceptible lag or echo. Unless you are doing one of these three tasks, you probably don't need a low latency setup.

Thus, for pro audio and other related multimedia applications, we have three performance variables to optimize and balance: throughput, energy efficiency, and latency (average and jitter, both).

===Optimization===
Relatively large optimizations in latency and jitter can be made with relatively small compromises in throughput and/or energy efficiency, allowing for well-rounded system performance in a variety of contexts. We just need to start considering latency (amount and jitter) alongside throughput and energy performance variables. Apart from making systems more versatile, optimizing performance along these three parameters also further improves multimedia performance, as professional audio work can often make significant demands of throughput performance in a low latency context.

For example, hyperthreading represents a classic tradeoff of throughput vs latency. According to Intel, "overall processing latency is significantly increased due to hyper-threading, with the negative effects becoming smaller as there are more simultaneous threads that can effectively use the additional hardware resource utilization provided by hyper-threading." Hyperthreading may increase context switching, which is resource- and time-intensive. By disabling the "virtual cores," the system schedules work across only the physical cores available, which decreases context switching and latency, but also lowers potential throughput. See http://techblog.cloudperf.net/2016/07/measuring-intel-hyper-thread-overhead.html, which concludes that "HT improved overall throughput by 25% but at a cost of higher latency" for each individual thread. More gets done overall, but each individual thread takes longer to complete: 8 threads finish in 1.6 seconds with HT vs 2 seconds without, while each individual thread takes 1.6 seconds with HT vs 1 second without. Low latency currently depends on single-thread throughput and response time rather than overall multi-threaded throughput, so there is a trade-off between responsiveness and throughput when dealing with highly time-sensitive applications, such as audio and multimedia processing and synchronization.
CAVEAT: Hyperthreading technology exposes computers to side-channel threats, and the mitigations for those threats reduce performance. By disabling hyperthreading, users can also safely disable the mitigations for side-channel timing attacks that exploit hyperthreading (https://threatpost.com/intel-zombieload-side-channel-attack-10-takeaways/144771/). In addition, disabling HT may reduce heat and power consumption. While certain workloads (such as transcoding, compiling, etc.) will take longer to complete, discrete tasks will complete more quickly, with less overhead. This absolutely applies to low latency throughput performance. For example, heat-limited CPUs (such as in laptops) will be able to maintain higher clock speeds for longer, which may translate into real-world low latency DSP performance gains, with higher throughput at lower latency.

The ELK Operating System (https://www.mindmusiclabs.com/) represents an extreme version of latency optimization for dedicated use in embedded hardware. In addition to optimizations such as those in this guide, ELK OS strips away many elements of a desktop operating system in order to achieve very low overhead, extremely low latencies and high priority for audio threads. This optimization also means greater throughput potential, as the OS does not "get in the way" of audio processing much. This is fantastic for dedicated, embedded hardware systems: it allows for long-service hardware, for example, that can be upgraded or modified via the internet or another data connection. However, the sacrifices in the OS design also make it inappropriate for multi-tasking desktop or laptop systems (see the FAQ at https://www.mindmusiclabs.com/#collapse7).

===User Psychology===
Apple's engineers learned early in the design of the Mac OS X audio system (around the year 2000) that humans are very sensitive to missed samples, because we are very sensitive (even hard-wired) to rhythm and rhythmic flow. You've all experienced it: The music plays, and then it stops. Or stutters. Or pops. It feels emotionally jarring. The same goes for laggy audio that runs smoothly but out of sync with other audio or video. It feels confusing and distracting. It creates a significantly negative emotional experience. Stable, reliable low latency setups are meant to prevent such occurrences at the source, both in the product (the audio itself) and in the process of producing that audio. Thus, there are three areas where glitch-free audio is really important:
1. the production process (working glitch-free while creating the audio)
2. the product itself (a glitch-free result)
3. playback

Glitch-free = no pauses, skips, drop-outs, pops, loss of synchronization or other unpleasant artifacts in the file, its creation process or its playback.

The first two are the concern of pro audio production. The last is the concern of everyone. In the Linux/Windows world, it has typically been solved with large and/or multiple buffers, resulting in high latencies inappropriate for pro audio work. This has created some fragmentation in computing markets. Apple historically capitalized on this fragmentation by prioritizing professional audio-visual production, which appears to monetize disproportionately. Rather than focusing on growing the biggest user base (Windows/Linux), Apple carefully chose to occupy and dominate a very valuable but relatively marginal niche, and grew from there to a dominant share of the computing market based on a market perception of "quality" user experience.

On the flip side, humans are relatively insensitive to small absolute differences in throughput, even if the differences are relatively significant. For example, it might sound good that a program loads or a task finishes "5 times faster." But 5 times faster than what? If the slower task completion time is 1 second, then the faster completion time is 0.2 seconds (200ms). Past a certain threshold, such a difference doesn't leave a significantly negative impression on a user in most cases. Those differences matter only in specialized niches that benefit heavily from improved throughput. And many of the tradeoffs in performance tuning for latency are much less significant, along the lines of a file encoding process taking 20 seconds instead of 18 seconds. An internet benchmark concerned only with throughput can obsess over this difference and inflate it into one of socially-constructed importance, but the actual psychological impact on the user is minimal. Even in cases where the difference is not minimal, it is still not very noticeable to the end user. The user doesn't sit around twiddling their thumbs waiting for an intensive computing operation to finish. They walk away, do something else, and come back. Or they stay on the computer and multitask with something else while they wait. In that case, they want the computer to remain responsive and reliable. They want a pleasant, glitch-free experience while working and waiting for the other task to finish, and they want the other task to finish reliably, without errors or glitches.

At the heart of this, we tend to place a psychological priority on reliability, and a large part of our experience of reliability involves latency. A system that crashes, stutters or craps out unpredictably breaks our trust. And no amount of throughput performance can re-establish broken trust. Imagine that you have a mechanical (powered) hammer capable of hammering several times faster than a manual hammer, and you are building a stick-framed house with nails. Now imagine yourself hammering away at those nails with that new gadget. Now imagine that the hammer head falls off at seemingly random points in the hammering process. Sometimes it just falls off; sometimes it goes flying. Either way, it drastically slows your progress, because it interrupts your workflow and concentration, or leads you on a wild goose chase to fix the problem, and can even permanently mar a project with mistakes (mis-hit wood and nails), creating more work on the back end. After a bit, you stop trusting the hammer. You feel uneasy and distracted around it, and your ability and willingness to use it effectively decreases. Now, imagine if you could turn down the speed of the hammer just a little bit, making it much more reliable and preventing the problems and trust issues that arise. Even though it is "slower," you still finish the house faster than if you tried to keep hammering at top speed. Unfortunately, the most popular internet benchmarks do not take into account these factors of immense real-world importance to end users, skewing and misdirecting computer hardware and operating system design and performance tuning. Apple succeeds in the marketplace despite regularly losing out in these so-called "performance tests" to similarly-spec'd Windows or Linux hardware.

===Low latency throughput===
We can't talk about latency or throughput without talking about their combined need in professional audio. A computer's throughput determines how much signal routing and processing it can do before overloading, which leads to audio glitches or dropouts (i.e., discarded audio signals that never made it from source to sink). A DAW will be limited in the number of audio streams it can record, process, route and mix at a given latency by its digital signal processing (DSP) capacity. A DAW's DSP performance is heavily dependent on its CPU frequency: higher frequencies mean faster processing and more DSP capacity.

Traditionally, CPU performance scaling (sleep states and frequency modulation) allowed CPU performance to change dynamically based on demand. However, CPU performance scaling currently has three limitations that make it unsuitable to dynamically manage CPU performance for low latency contexts:
1. It is relatively insensitive to a processor's DSP load
2. It adds overhead (meaning it takes additional time, energy and CPU cycles to change between faster and slower frequencies, which lowers overall throughput capacity and can add latency and jitter)
3. It occurs much too slowly (e.g., in 10-30ms) to be of use in a context demanding throughput performance at low (<10ms) latencies.

For these reasons, by the time a CPU scales to a faster frequency based on demand, a DSP overload (and thus audio glitches such as pops, crackles, or dropouts) has likely already occurred. Said another way, the CPU scales to a past, rather than current or future, demand, and fails to actually meet that demand. By extension, we can conclude that the reliable throughput performance of a DAW comes from its lowest sustainable operating frequency, not the fastest theoretical frequency that the CPU might ramp up to based on past demand. For example, if a CPU has a baseline frequency of 1200 MHz but can scale up to 2400 MHz "on demand," the more accurate measure of low latency throughput is the 1200 MHz number, assuming the computer can sustain that frequency indefinitely under load. CPU cooling matters: if a CPU begins to overheat, it will throttle its speed down. This is a common problem in high-spec'd "ultrabooks," which have impressive hardware capacities on paper and in transient load response benchmarks, but often struggle to sustain high throughput due to poor cooling. Cooling depends on the ability to remove heat, which depends on some combination of power (fan, water pump) and circulation (space). Ultrabooks often compromise cooling for light weight, small size and battery life. This is why desktops and larger laptops often perform better as DAWs than similarly-spec'd ultrabooks: they can sustain higher workloads or minimum frequencies for longer.

We can't currently depend on the "race to sleep" philosophy to provide both power savings and throughput performance within the minute tolerances of a system we are also asking to reliably handle (process and distribute) audio and related data streams at low latencies. Until CPU frequency scaling can occur in microseconds instead of milliseconds, and scaling can occur based on DSP (rather than overall CPU) load, the only way to have both high throughput and reliable low latency is to increase a processor's baseline frequency, which decreases its energy efficiency. This is not needed in all contexts -- only when relying heavily on DSP, e.g., when running many plugins or high quality digital instruments at high polyphony and high sample rates in real time.

Unfortunately, a user must either anticipate the amount of DSP capacity needed and set their lowest CPU frequency accordingly (all the way up to the maximum frequency), or leave the baseline frequency "as is" and stay within its limits, avoiding audio glitches by reducing the CPU load for the session (fewer plugins or audio streams, or lower quality settings). See https://github.com/falkTX/Cadence/issues/250
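As a sketch of the "raise the baseline" approach, the function below pins each core's minimum frequency to its maximum through the standard Linux cpufreq sysfs interface. The function name is my own, and it takes the sysfs root as a parameter so it can be dry-run against a copy of the tree; run against the real path it requires root, and it trades battery life for a stable DSP floor:

```shell
# pin_min_freq ROOT: for every cpufreq policy under ROOT, set
# scaling_min_freq equal to cpuinfo_max_freq, so the CPU never scales
# below its sustainable maximum while a session is running.
pin_min_freq() {
    root="$1"
    for policy in "$root"/cpufreq/policy*; do
        [ -d "$policy" ] || continue
        max=$(cat "$policy/cpuinfo_max_freq")
        echo "$max" > "$policy/scaling_min_freq"
        echo "$(basename "$policy"): min pinned to ${max} kHz"
    done
}

# On a live system (requires root):
#   pin_min_freq /sys/devices/system/cpu
```

Note that sysfs reports frequencies in kHz, and that rebooting (or writing the old value back) restores normal scaling.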
Caveat: Disabling hyperthreading may make CPU scaling more relevant. The reason is that DSP is done by the CPU's floating point unit (FPU), and each physical core has one FPU. Hyperthreading can cause two threads to share a single FPU, which can cause a mild resource conflict and context switching in the FPU between the two threads. As a result, the CPU may report that it is under-utilized while the FPU is utilized to its maximum potential, causing overloads and dropouts (stealth spikes in FPU latency). Disabling hyperthreading restores the ratio of one FPU per core (vs one FPU per two virtual cores). Since a single core without hyperthreading doesn't share FPU resources, and that context switching won't occur, DSP utilization may be reported more accurately as overall CPU utilization, allowing CPU scaling to respond more proactively to DSP load. But it may still occur too slowly to be of real-time use.

Some operations require only low latency and minimal CPU processing capacity, such as recording raw streams of high quality audio. The CPU has to work very little in this situation because it is mostly directing and distributing data streams, rather than processing (changing) them in real time. In such a situation, a relatively slow (energy efficient) CPU speed will suffice. Likewise, high quality digital signal processing of even a few data streams can easily maximize the use of a fast CPU operating at peak frequencies. In such situations, a DSP limit means a tradeoff between either (high) audio quality or (glitch-free) audio reliability, and benefits heavily from maximized DSP capacity.
 

===Realtime Priorities===
Several system configuration steps contribute immensely to low latency performance, regardless of CPU performance or settings.

Low-latency kernel configuration: The kernel is the core of the operating system. It is one of the most fundamental software interfaces with the hardware. GNU/Linux (including Pop!_OS), Windows (NT) and Mac (Mach) all use kernels. Pop!_OS is just one of many GNU/Linux operating systems. The kernel defines some of the most fundamental performance characteristics and focuses of the operating system.

Windows and macOS each use only one kernel. macOS is already heavily optimized to prioritize glitch-free audio, in large part because it was designed around that niche market, Apple's last remaining user base in 1998, giving the userland audio library team priority in setting the OS development agenda. The Windows NT and Linux kernels had no such focus. However, Linux kernel development moves relatively rapidly. Because it is open source, it is regularly forked from the "mainline" version and modified for specific operating conditions and parameters. Sometimes those modifications are merged back into the mainline when they are perceived as generally beneficial. In addition, many operating systems based on the Linux kernel (such as Pop!_OS) maintain more than one kernel at a time, for use in different circumstances. In the specific case of Pop!_OS, we have both the linux-generic and linux-lowlatency kernels to consider.

The current linux-lowlatency kernel line is thought to contain the most important optimizations for latency-sensitive computing, still making it appropriate for most general computing circumstances. The differences between the -lowlatency and -generic kernel lines are few but key [insert kernel_diff.txt].

Specifically, the -lowlatency kernel allows "lower priority" threads or processes to be fully pre-empted by "higher priority" threads or processes. In lay terms, threads with a high priority attached to them can "cut in line" to ensure that they get executed on demand. But this cutting in line also carries administrative overhead, which lowers overall throughput (the number of threads that can get through the queue in a given timeframe). In the -generic kernel, pre-emption is merely "voluntary," meaning that "the running process declares points where it can be preempted (where otherwise it would run until completion)" (https://stackoverflow.com/a/5741721/11705382).

Second, the -lowlatency kernel operates at a higher tick rate: 1000 Hz vs 250 Hz (ticks per second). That means the -lowlatency kernel has four times as many opportunities per second to interrupt its current queue and let higher priority processes cut in line. This again makes the system more responsive to those higher priorities, but at a cost in throughput and energy efficiency. The "NO_HZ" parameter means that a CPU can stop its timer ticks when they are not needed, which allows the CPU to rest in a lower power state. Both kernel lines have this feature enabled by default.
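You can check which preemption model and tick rate a given kernel was built with by grepping its build config under /boot; the helper name below is my own:

```shell
# kernel_latency_config FILE: show the preemption and timer-frequency
# settings the kernel described by FILE was compiled with. Expect
# CONFIG_PREEMPT_VOLUNTARY=y and CONFIG_HZ=250 on -generic, vs
# CONFIG_PREEMPT=y and CONFIG_HZ=1000 on -lowlatency.
kernel_latency_config() {
    grep -E '^CONFIG_(PREEMPT(_VOLUNTARY|_NONE)?|HZ)=' "$1"
}

# For the currently running kernel:
#   kernel_latency_config "/boot/config-$(uname -r)"
```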

In the near future, no_hz_full (adaptive ticks) may allow additional performance improvements (greater power saving and throughput with less jitter in low latency contexts).

====RTIRQ script====
By using a -lowlatency (or similarly optimized) kernel line instead of the -generic kernel line, we can then configure the operating system to give certain threads high-priority, "cut-in-line" privileges. In our case, we want to configure the computer to prioritize audio and related data threads and processes. Note that this does not raise the priority of an entire audio-based software program. For example, the user interface and any non-audio threads and processes still operate at nominal priority, which means the system will temporarily ignore them if necessary to ensure that audio-related processing completes on schedule. Some visual indicators may lag or even freeze to preserve the priority and glitch-free reliability of audio streams. This is not a malfunction, merely the computer maintaining appropriate priorities when its processing resources are under high demand. It makes sense: would you rather have a temporary and relatively inconsequential display glitch, or a permanent glitch in audio that might completely ruin a take or track?

There are many ways to accomplish the task of elevating privileges and priority of audio threads. RTIRQ (https://www.rncbc.org/drupal/node/1979) plus a pre-emptable kernel is probably the most accessible and widely-used strategy. The startup script (which also runs on resume from suspend) simply reorders all audio-related IRQ threads at high priority on the realtime scheduler. It does not work with the voluntary pre-emption of the -generic kernel.
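For reference, rtirq is configured through /etc/default/rtirq. The values below are typical examples, not universal defaults; check which IRQ names your own hardware uses (e.g. via /proc/interrupts) before copying them:

```shell
# Excerpt in the style of /etc/default/rtirq.
# IRQ threads whose names match entries in this list are moved onto the
# realtime scheduler, highest priority first: sound cards, then USB
# (for USB audio interfaces), then the keyboard controller.
RTIRQ_NAME_LIST="snd usb i8042"

# Priority given to the first name in the list:
RTIRQ_PRIO_HIGH=90

# How much the priority drops for each subsequent name:
RTIRQ_PRIO_DECR=5
```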

As a result of the above -lowlatency plus RTIRQ configuration, systems can often safely leave background processes, such as WiFi, ethernet or bluetooth connections, active and running without fear that they will "fight" for priority with audio threads and processes. Audio will still come first, which might throttle and slow down the lower priority threads and processes, but still allow them to function. Without this step, users can notice severe audio performance degradation while running network services, for example, under voluntary pre-emption or without audio threads gaining high priority (just below system-critical threads). Likewise, turning off or disabling networking services and other background processes can yield significant performance increases without further tweaking of the setup. This is more of an ad hoc approach to low latency configuration.

====Userspace configuration====
The Cadence suite helps with some ad hoc configuration, as well as jack server configuration. Both qjackctl and Cadence automatically install jack2 (aka jackdmp). When JACK is installed, it automatically modifies /etc/security/limits.d/ to allow audio threads to access pre-emptive realtime priorities with both the -generic and -lowlatency kernel lines.

Both processes and threads can be given elevated privileges and priorities: https://stackoverflow.com/a/200543/11705382
In Linux, we can elevate the priority of audio threads via the audio group: https://wiki.ubuntu.com/Audio/TheAudioGroup
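On Debian/Ubuntu-derived systems (including Pop!_OS), the limits file installed by the JACK package looks roughly like the excerpt below; the exact filename and values may differ on your install. It grants members of the audio group the realtime scheduling and memory-locking privileges described above:

```
# /etc/security/limits.d/audio.conf (typical contents)
# Members of the "audio" group may schedule realtime threads up to
# priority 95 and may lock unlimited amounts of memory:
@audio   -  rtprio     95
@audio   -  memlock    unlimited
```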

You can set the realtime priority of the Jack sound server via its runtime configuration.
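A minimal sketch of starting JACK with realtime scheduling from the command line (qjackctl and Cadence set the same options graphically); the specific rate, period and priority values are illustrative, and the arithmetic shows how period size, period count and sample rate determine buffer latency:

```shell
RATE=48000     # sample rate in Hz
PERIOD=128     # frames per period
NPERIODS=2     # periods per buffer

# Total buffer latency in milliseconds = frames in the buffer / rate:
awk -v p="$PERIOD" -v n="$NPERIODS" -v r="$RATE" \
    'BEGIN { printf "buffer latency: %.1f ms\n", 1000 * p * n / r }'
# prints: buffer latency: 5.3 ms

# Start JACK in realtime mode (-R) at server priority 70 (-P) with the
# ALSA backend (commented out; run it on a machine with a sound card):
#   jackd -R -P 70 -d alsa -r "$RATE" -p "$PERIOD" -n "$NPERIODS"
```

Halving the period size halves the buffer latency but doubles the scheduling pressure, which is exactly where the realtime priorities above start to matter.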

==Performance Tuning Steps==
These are actual steps that users can take to optimize their standard Pop!_OS desktop for multimedia production.

Disable hyperthreading for the -lowlatency kernel by booting with either maxcpus=[4] (half the logical CPU count, on a 4-core/8-thread system) or nosmt on the kernel command line: https://coreos.com/os/docs/latest/disabling-smt.html

"I highly recommend disabling hyperthreading if one wants work to be done in the order it's queued with minimal overhead." This applies to all time-sensitive processing tasks, such as low latency DSP.
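A sketch of checking and disabling SMT at runtime, assuming the standard /sys/devices/system/cpu/smt interface and Pop!_OS's kernelstub tool; the commands that actually change state are left commented out:

```shell
# Report whether SMT (hyperthreading) is currently active.
smt_state=$(cat /sys/devices/system/cpu/smt/active 2>/dev/null || echo unknown)
echo "SMT active: $smt_state"   # 1 = enabled, 0 = disabled

# Disable SMT until the next reboot (requires root):
#   echo off | sudo tee /sys/devices/system/cpu/smt/control

# Make it persistent on Pop!_OS by adding nosmt to the kernel
# command line via kernelstub:
#   sudo kernelstub --add-options "nosmt"
```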

==Software Options==

The Pop!_OS repositories already contain a lot of great multimedia production tools, by way of the Ubuntu repositories. However, much of that software is outdated. This happens in three ways: 1. The software was abandoned and is no longer maintained, but is still included in the repositories. 2. The software is still maintained, but the repository version is not up to date. 3. A newer piece of software doesn't make it into the repositories for quite a while.

This is not a problem specific to Pop!_OS or Ubuntu, but one shared across the entire GNU/Linux world. Maintaining a repository against a system architecture and distribution packaging standard requires a lot of work; worse, it is heavily duplicated work. For example, Ubuntu and Fedora are large distributions with lots of software, but they maintain completely parallel repositories under different packaging standards. This means a program is often compiled from source and packaged several times, across several different GNU/Linux systems. This is wasted effort that could be spent elsewhere, such as on support, bug fixes, features and documentation for software that is not a core, integrated part of the OS. On top of this, the extra effort means that software slips through the cracks, or even outright conflicts with core OS dependencies and cannot be included or updated. However, some have hypothesized that this packaging variability also makes GNU/Linux less susceptible to security issues such as various forms of malware, since a Fedora-packaged program will not run in a Debian-packaged environment, in some of the same ways that a Mac OS program will not run in a Windows environment. I don't know to what extent a unified software infrastructure would pose a real-world, practical threat to GNU/Linux security.

KXStudio repositories: This probably represents the best effort in the Debian world to provide an up-to-date multimedia software repository. Even still, it is maintained by one person, who also maintains an entire multimedia distribution (KXStudio), and suffers from the same challenges as other repositories.

Future: Flatpak

The move of desktop software to flatpak would drastically shrink the size and number of repositories needed. It would also shrink the amount of time and effort in maintaining those repositories, restricting it to what makes the OS unique. Any "add-on" software that could run on any OS doesn't necessarily need tight, strict integration into the core OS. That means software could be included and updated as it becomes available, without the overhead or challenges of aligning package dependencies, or compiling packages for different distributions. If this does not pose significant real-world security or performance concerns, it appears to be a great opportunity to solve many practical issues with software. The additional time and effort saved in software packaging and maintenance could be used for OS or software development and maintenance, improving the overall OS quality.

==System76 Recommendations==

See https://pop-planet.info/forums/threads/pro-audio-suggestions.250/, which follows on from this discussion. There is ample reason to believe that multimedia-tuned computing products command disproportionately high margins. Combined with dissatisfaction with Apple and advances in the software and hardware available for GNU/Linux systems, it may make sense for System76 to invest some minimal resources so that it can market its machines as "multimedia workstation ready."

Hardware: see http://manual.ardour.org/setting-up-your-system/the-right-computer-system-for-digital-audio/

There is a lot of opportunity to optimize hardware based on component design, selection and driver development, especially as System76 gains greater control over its hardware selection and design.

Make it easier to disable hyperthreading. This would be especially beneficial if the configuration could be tied to the kernel (i.e., per boot entry, instead of a universal BIOS setting). That way, a user could easily switch at boot time between a system optimized for low-latency performance and one optimized for generic desktop use and race-to-sleep/maximum throughput.
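As a sketch of what per-kernel (rather than BIOS-wide) control could look like already: recent kernels accept the `nosmt` boot parameter and also expose a runtime switch in sysfs. On Pop!_OS, boot options are managed by kernelstub, so (assuming a stock systemd-boot install) something like:

```shell
# Disable SMT/hyperthreading on every subsequent boot
sudo kernelstub --add-options "nosmt"

# Or toggle it at runtime without rebooting (kernels >= 4.19)
echo off | sudo tee /sys/devices/system/cpu/smt/control

# Verify: "off" means the sibling threads are offline
cat /sys/devices/system/cpu/smt/control
```

Removing the option again with `sudo kernelstub --delete-options "nosmt"` restores hyperthreading at the next boot.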

Linux kernel: linux-lowlatency is technically available in the repositories, but kernelstub needs work to accommodate more than one installed kernel at a time. ALTERNATIVE: allow users to swap linux-generic for linux-lowlatency by making the System76 driver depend on either package.
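Assuming the Ubuntu `linux-lowlatency` metapackage is available in Pop!_OS's repositories (Pop!_OS pulls from the Ubuntu archives), installing it is a one-liner; the caveat above is that kernelstub/systemd-boot may still only track one kernel:

```shell
# Install the low-latency kernel alongside the generic one
sudo apt install linux-lowlatency

# After rebooting, confirm which kernel is actually running
uname -r
```

A `-lowlatency` suffix in the `uname -r` output indicates the preemption-tuned kernel booted.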

===Software===

Maintain up-to-date packages for core software: the Cadence suite, rtirq, JACK2, and perhaps some core DAW software such as Qtractor, Ardour (see http://manual.ardour.org/setting-up-your-system/platform-specifics/ubuntu-linux/), Non-DAW, and the Calf plugins.

Transition to Flatpak to offload the burden of packaging and repository maintenance and ultimately give users greater choice in software options.

In the meantime, poach from up-to-date repositories such as KXStudio, and/or create a volunteer-driven community repository that lets volunteers help maintain non-essential packages for Pop!_OS.

System76-power: https://github.com/falkTX/Cadence/issues/250

The system76-power configurations do not make sense to me. "Battery Life" is merely a "conservative" governor setting that violates "race to sleep," so it often actually results in relatively poor battery life. Really, only two configurations are needed:
1. Fully automatic power regulation at the left (i.e., a combination of TLP and the powersave governor with full CPU frequency scaling; the same as the system76-power "balanced" setting with TLP), and
2. Defeat of certain power-saving features at the right, with a performance governor that raises the minimum frequency, optimized for low-latency throughput.

The third "battery life" option is superfluous except in marginal situations where a laptop is under constant high load on battery power, such as from a runaway process. That is really just compensation for a poorly tuned system or for misbehaving, buggy, or poorly optimized software.
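For reference, the profiles under discussion can be switched from the command line via the system76-power CLI (a sketch; it assumes the `system76-power` daemon is running, as it is on Pop!_OS by default):

```shell
# Show the currently active profile
system76-power profile

# The two settings argued for above:
sudo system76-power profile balanced     # automatic regulation for everyday use
sudo system76-power profile performance  # defeat power saving for low-latency work

# The profile this post argues is superfluous:
sudo system76-power profile battery
```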

General system tuning: consider latency (especially audio latency!) and user psychology as important factors alongside throughput and energy efficiency. This general philosophy matters both for a quality user experience and for maintaining users' trust in their computing systems.

https://access.redhat.com/sites/default/files/attachments/201501-perf-brief-low-latency-tuning-rhel7-v1.1.pdf (see in particular p. 8 on performance profiles: "Because tuning for throughput [is] often at odds with tuning for latency, profiles have been split along those boundaries as well providing a “balanced” profile").
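The Red Hat brief above implements that split with `tuned` profiles. Pop!_OS doesn't ship tuned by default, but it is packaged in the Ubuntu archives, so the throughput-vs-latency split can be tried directly (a sketch, assuming the stock profile names):

```shell
# Install the tuning daemon
sudo apt install tuned

# List available profiles (includes throughput-performance
# and latency-performance, per the Red Hat brief)
tuned-adm list

# Switch to the latency-oriented profile while doing audio work
sudo tuned-adm profile latency-performance

# Confirm the active profile
tuned-adm active
```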
==Other Resources==

===Pop!_OS Resources===


Mattermost Pro Audio Channel

Telegram: Linux Audio

Pop!_Planet




===Generic Resources===

http://www.jackaudio.org/ -- good general information about pro audio setup on Linux-based operating systems

linuxaudio.org


Ardour user manual: http://manual.ardour.org/

specifically http://manual.ardour.org/recording/monitoring/latency-considerations/
 
