Migrating Fedora from BIOS to UEFI

Let me tell you a story.

This is not a sad story, but a geeky one.

A story about a developer who was told it was impossible to migrate his Fedora OS from BIOS to UEFI, and who, against all odds, succeeded.

A few months ago I started working at a new place and got a shiny Dell XPS 9560.

The spec was amazing: a top-of-the-line CPU, a GPU, a 4K screen and even 32GB of RAM!

But the issues… oh… the issues. Thank god most of them are solvable by a simple firmware upgrade. The rest are GPU issues, which led me to disable the discrete NVIDIA GPU (which I don’t need anyway).

Ok, so how do I upgrade the firmware? fwupd comes to the rescue:

“fwupd is an open source daemon for managing the installation of firmware updates on Linux-based systems, developed by GNOME maintainer Richard Hughes…” - Wikipedia.

I was a few keystrokes away from getting all my issues solved!
Dell put in a lot of effort to make sure fwupd works great with their products, so I wasn’t surprised that my laptop was supported.

$ fwupdmgr refresh
$ fwupdmgr update
No devices can be updated: Nothing to do
$ fwupdate --supported
Firmware updates are not supported on this machine.

What?! But why?! fwupdmgr recognizes my devices:

$ fwupdmgr get-devices
Intel AMT (unprovisioned)
...
XPS 15 9560 System Firmware
...
Integrated Webcam HD
...
GP107M [GeForce GTX 1050 Mobile]
...

So what’s wrong? I’m connected to AC, I’m running as root, and I’ve got UEFI Capsule Updates turned on.

Oh wait. I’m not using UEFI. No problem! Let’s migrate!

https://docs.fedoraproject.org/f26/install-guide/install/Booting_the_Installation.html

My first thought: “Oh shit. I’m f\cked”. My second thought: “that doesn’t make any sense!”.

Game Plan

All I need is a simple grub2-mkconfig while booted in UEFI mode, but how?

  1. Convert my partition table to a GUID Partition Table (GPT)
  2. Free up some space for an EFI (/boot/efi) partition
  3. Update GRUB to use UEFI

Before we continue, I want to share with you my own partition table:

Disk /dev/sda: 953.9 GiB, 1024209543168 bytes, 2000409264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos

Device          Start        End    Sectors    Size  Type
/dev/sda1        2048    1953791    1951744    953M  Linux filesystem
/dev/sda2     1953792   60549119   58595328     28G  Linux swap
/dev/sda3    60549120  493574143  433025024  206.5G  Linux filesystem
/dev/sda4   493574144 1669111807 1175537664  560.6G  Linux filesystem
/dev/sda5  1669111808 2000203775  331091968  157.9G  Linux filesystem

I have two OSes installed, Arch & Fedora -

  • Arch: /boot is on /dev/sda1 and / is on /dev/sda3.
  • Fedora: / is on /dev/sda4.

Both use /dev/sda2 for swap, and /dev/sda5 has some other data.
I don’t need Arch anymore, and would like to migrate Fedora to UEFI.

LiveUSB

I knew that most of the changes I had to make couldn’t be done on mounted volumes, so I had to use a LiveCD. But nobody uses LiveCDs nowadays - LiveUSB is the word on the streets.

I had two options: either download a LiveCD image and burn it, or use Fedora Media Writer.

Then, change my BIOS configuration to boot in UEFI mode, and boot it up.

Convert partition table to GPT

I got the LiveUSB installed on a company thumb drive. Now I needed to convert my partition table from dos (MBR) to GUID (GPT).

This step is rather simple. I used gdisk:

# become root (shouldn't require a password on the live image)
$ su
# open the target disk (mine is /dev/sda)
$ gdisk /dev/your/device
# gdisk will warn that it found an MBR (dos) partition table
# and will convert it to GPT in memory.
# press 'w' to write the new table to disk and you're done.

Free up space

I actually had another OS installed at the beginning of the partition table which I didn’t use anymore, so I just deleted it and created new partitions from the LiveUSB.

If you don’t have one, install GParted and use it to free up ~10GB at the beginning of the partition table.

Why 10GB? Well, instead of trying to figure out how to install UEFI correctly, I decided to install another Fedora instance and delete it once I was done. That way I knew for sure everything would work correctly.

Ok, so now - install Fedora. The installer should prompt you to create an EFI partition. Create a 1GB partition at the beginning, and use the rest for the new Fedora installation.

Update GRUB

Installed? Yay. Now reboot. Don’t worry, you won’t see your “old” Fedora installation on boot. Get into your shiny new installation, but don’t get too attached - we’ll destroy it in a few minutes!

Recap

I’ve got a new GPT partition table with an EFI partition at the beginning:

Disk /dev/sda: 953.9 GiB, 1024209543168 bytes, 2000409264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt

Device          Start        End    Sectors    Size  Type
/dev/sda6        2048    1953791    1951744    953M  EFI System
/dev/sda7     1953792   60549119   58595328     28G  Linux swap
/dev/sda8    60549120  493574143  433025024  206.5G  Linux filesystem
/dev/sda4   493574144 1669111807 1175537664  560.6G  Linux filesystem
/dev/sda5  1669111808 2000203775  331091968  157.9G  Linux filesystem

Again, I have two OSes installed, Fedora’ & Fedora’’ -

  • Fedora’, the temporary one, has /boot/efi mounted at /dev/sda6 and / mounted at /dev/sda8.
  • Fedora’’, the “old” one, has / mounted at /dev/sda4.

Both use /dev/sda7 for swap.

chroot

I need Fedora'' to mount /dev/sda6 (/boot/efi) on boot, and to be configured to use UEFI. chroot to the rescue!

For those of you who have never heard of change root, Wikipedia provides a good explanation:

Chroot is an operation that changes the apparent root directory for the current running process and its children.

A program that is run in such a modified environment cannot access files and commands outside that environmental directory tree. This modified environment is called a chroot jail.
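
To make that concrete: the chroot command is essentially a thin wrapper around the chroot(2) syscall. Here’s a minimal C++ sketch of my own (for illustration only; it must run as root) of roughly what chroot /mnt/fedora /bin/bash does:

#include <unistd.h>
#include <cstdio>

int main() {
  // change the apparent root directory, then move into it
  if (chroot("/mnt/fedora") != 0 || chdir("/") != 0) {
    std::perror("chroot");
    return 1;
  }
  // from here on, "/" refers to /mnt/fedora for this process
  // and its children
  char *const argv[] = {const_cast<char *>("/bin/bash"), nullptr};
  execv("/bin/bash", argv);  // only returns on failure
  std::perror("execv");
  return 1;
}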

So back to where we were… Let’s chroot and get this over with.

# just login as root
$ sudo su
# create the mount points and mount everything
$ mkdir -p /mnt/fedora
$ mount /dev/sda4 /mnt/fedora
$ mkdir -p /mnt/fedora/boot/efi
$ mount /dev/sda6 /mnt/fedora/boot/efi
$ mount -t proc proc /mnt/fedora/proc/
$ mount --rbind /sys /mnt/fedora/sys/
$ mount --rbind /dev /mnt/fedora/dev/
$ mount --rbind /var /mnt/fedora/var/
# copy the /boot/efi entry from the running install's fstab
# (you might want to comment out any old /boot mounts you might have)
$ grep "/boot/efi" /etc/fstab >> /mnt/fedora/etc/fstab
# chroot into your system
$ chroot /mnt/fedora /bin/bash

Awesome. I’m in Fedora''. Now I need to follow Fedora’s Updating GRUB 2 configuration on UEFI systems.

TL;DR:

$ sudo dnf reinstall grub2-efi grub2-efi-modules shim
$ sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

It worked! grub2-mkconfig told me it found Fedora' and Fedora''!

Cleaning Up

I’m basically done! A few minor steps remain -

First, restart and boot into Fedora''.

Second, open GParted, remove /dev/sda8 (the temporary Fedora' root - keep /dev/sda6, your new EFI partition!) and re-arrange the other partitions as needed.

Third, re-create my grub config so Fedora' disappears (like we did before):

$ sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

Checking that it all works

I know it sounds stupid, because the OS already booted, but why not?

$ ls /sys/firmware/efi/efivars | wc -l
81
$ efibootmgr
BootCurrent: 0007
Timeout: 0 seconds
BootOrder: 0007
Boot0000* Windows Boot Manager
Boot0006* Linux-Firmware-Updater \fwupx64.efi
Boot0007* Fedora

See? that wasn’t too hard!

Upgrading Firmware

After I did all that, I reran fwupd:

$ fwupdate --supported
Firmware updates are supported on this machine.

Yay!

$ fwupdmgr refresh
$ fwupdmgr update
...
$ reboot

Done. By the way, ALL the issues I previously had were gone after upgrading!

The Guts n’ Glory of Database Internals

A year ago Oren Eini (a.k.a @ayende) wrote a series dubbed “The Guts n’ Glory of Database Internals”.

Instead of just explaining how databases work, he incrementally builds a database from scratch. He goes over most database essentials, so once you’re done you’ll be able to understand how databases actually work. If you haven’t taken any database courses at uni, this is a must in my opinion.

The series is built around a “bookkeeping” system. The problem? We need to keep track of users and how often they log into the system.

He begins by persisting the data to a simple CSV file, and then raises issues with this solution. Each following part addresses those issues and raises new ones: selection time, concurrency, durability, logging and more.
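
To make the starting point concrete, here’s a minimal sketch of my own (not Oren’s code) of the naive approach the series opens with:

#include <fstream>
#include <string>

// Minimal sketch (mine, not Oren's): record each login by
// appending a line to a CSV file.
void record_login(const std::string &user, long timestamp) {
  std::ofstream out("logins.csv", std::ios::app);  // open for appending
  out << user << ',' << timestamp << '\n';
}

int main() { record_login("john", 1500000000); }

Every later part pokes a hole in this sketch: lookups require scanning the whole file, concurrent writers can interleave lines, and a crash can lose buffered data.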

Each part builds upon the previous and most parts take around 5 minutes to read.
The series is made up of 20 parts and takes ~1.5 hours to read from start to finish. Not so bad, right?

By the way, @ayende writes really well, so it’s also fun to read:

  1. Persisting information | 6 minutes
  2. Searching information and file format | 4 minutes
  3. The LSM option | 3 minutes
  4. B+ Tree | 11 minutes
  5. Seeing the forest for the trees | 4 minutes
  6. Managing Records | 7 minutes
  7. Managing Concurrency | 6 minutes
  8. Understanding durability with hard disks | 6 minutes
  9. Durability in the real world | 4 minutes
  10. Getting durable, faster | 3 minutes
  11. Writing to a data file | 3 minutes
  12. The enemy of thy database is… | 4 minutes
  13. The communication protocol | 4 minutes
  14. Backup, restore and the environment… | 4 minutes
  15. The curse of old age… | 5 minutes
  16. What the disk can do for you | 2 minutes
  17. What goes inside the transaction journal | 8 minutes
  18. Log shipping and point in time recovery | 3 minutes
  19. Merging transactions | 4 minutes
  20. Early lock release | 2 minutes

My take on Modern C++

C++ is a big language that has evolved tremendously since its inception back in the 1980s.

Throughout the years, many millions of lines of code have been written in the language, and a big portion of that code uses legacy features that aren’t considered good practice anymore.

Replacing C++?

There have been many attempts to replace the language. All of them failed, as far as I know.

Some attempts were made to subset the language in order to get rid of code & language debt, but doing so hurt speed and portability.

The most recent hype is around Rust, which is a blazingly fast, memory-safe systems programming language. I see a promising future for Rust, and I’m actually learning it myself. But as Bjarne said in his talk Writing Good C++14, it takes ~10 years for a good language to make it to the mainstream.

C++ is already here. We need to find a way for people to write good C++ now.

Subset of Superset

Simply sub-setting the language won’t work; previous failed attempts back this up. In order to maintain the speed of C++ we need the low-level / tricky / close-to-the-hardware / error-prone / expert-only features. Those features are the building blocks of higher-level facilities & libraries.

Bjarne talked about the subject at CPPCon a few years back:

Bjarne said we first need to create a superset of the language, then subset it in order to get rid of the crud. To do so, we need supporting facilities to ease the transition: from guidelines on how to write modern C++, to libraries that encapsulate the usage of messy & dangerous things so most programmers won’t need to use them.

What is Modern C++

What is modern C++? Put simply, C++ that is based on C++1x and uses modern best practices.

To really grasp the essence of Modern C++, read the Core Guidelines. But nobody does that, right?

Talks & Books

I really liked Bjarne’s Writing Good C++14, Neil MacIntosh’s The Guideline Support Library: One Year Later, Herb Sutter’s Writing Good C++ by Default & Modern C++: What You Need to Know.

I’ve also read parts of Effective Modern C++ by Scott Meyers and found it useful.

C++ Seasoning

I find Sean Parent’s C++ Seasoning talk so good that I think you have to see it. I wrote about it in my previous post: C++ algorithm Series #0.

The talk takes a look at many of the new features in C++ and a couple of old features you may not have known about. With the goal of correctness in mind, Sean shows how to utilize these features to create simple, clear, and beautiful code.

TL;DR: No Raw Loops, No Raw Synchronization Primitives, No Raw Pointers.

Sean also gave another talk on the subject at Amazon’s A9 Programming Conversations course.


Writing Modern C++

These are my do’s and don’ts regarding modern C++. There are also other things I do in order to make sure my projects are written well:

  • Use CMake to build your project
  • Use memcheck to detect memory leaks
  • Run fuzzers to validate input
  • and more …

[!] Are you using a package manager? Please let me know.

Follow Well-Known Guidelines

First, follow the C++ Core Guidelines. You don’t need to actually read them; there are tools like Clang-Tidy that have the core guidelines pre-baked. Once you get a warning, please go ahead and read the whole guideline. It’s important to understand why the guideline exists.

Second, consider following well-known coding conventions and guidelines. On many occasions you can find tooling that helps you follow guidelines created by big projects / corporations.

For instance, Clang-Format has pre-baked support for LLVM, Google, Chromium, Mozilla & WebKit. Clang-Tidy has pre-baked checks that follow Boost, Google, LLVM and more.

Use standard / popular libraries as much as possible; I try to reach for the Standard Library first.

Compiler Flags

Turn on warnings, and preferably warnings-as-errors. I usually turn on -Wall and -Werror. They are annoying, but a necessary evil IMO.

RAII

A few days ago I watched a talk called “Modernizing Legacy C++ Code” where they suggested using RAII everywhere.

I’m not surprised; I’m a huge fan of RAII. Not only does it make code cleaner, thus reducing bugs and memory leaks, it also has an extremely low performance impact (compared to golang’s defer, which is used for the same purpose).

If you’re interfacing with C code, consider creating a scope guard. I use my home-baked defer-clone for that purpose.
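
For instance, here’s a minimal sketch of such a guard (a simplified stand-in for my defer-clone, not its actual code) that closes a C FILE* on every exit path:

#include <cstdio>
#include <functional>

// Minimal scope guard sketch: run a callback when the guard
// leaves scope, whatever the exit path.
struct ScopeGuard {
  std::function<void()> fn;
  ~ScopeGuard() { if (fn) fn(); }
};

void append_line(const char *path, const char *line) {
  std::FILE *f = std::fopen(path, "a");
  if (!f) return;
  ScopeGuard close_f{[&] { std::fclose(f); }};  // runs on every return
  std::fprintf(f, "%s\n", line);
}

int main() { append_line("/tmp/raii.txt", "hello"); }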

Const-Qualify Everything

In “Modernizing Legacy C++ Code” they also talked about using const everywhere. At first it sounded weird, but it actually made a lot of sense once they showed a few examples.
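
I can’t embed their examples here, but here’s a small sketch of my own of what “const everywhere” looks like in practice:

#include <cstddef>
#include <string>
#include <vector>

class Inventory {
 public:
  explicit Inventory(std::vector<std::string> items)
      : items_(std::move(items)) {}

  // const member function: calling it can't mutate the object
  std::size_t count() const { return items_.size(); }

 private:
  std::vector<std::string> items_;
};

int main() {
  // const locals: the compiler rejects accidental reassignment
  const std::vector<std::string> items = {"disk", "ram"};
  const Inventory inv(items);
  return inv.count() == 2 ? 0 : 1;
}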

This is a rolling release. That is, I’ll keep updating this post with new insights.

C++ <algorithm> Series #0

The more I learn about an eco-system, the more I understand how little I know about it.
I’m a master of nothing. There’s always something new to know, and the list only grows.

This time around it’s C++ that haunts me.

I recently “binge-watched” a few CPPCon talks.

They were awesome, but one talk really got my attention - C++ Seasoning by Sean Parent.

I have this “I don’t know anything about programming” feeling every time I watch a Rich Hickey talk.
Sean’s C++ Seasoning (then Programming Conversations #1 & #2) made me feel the same.
His talk reminded me, again, that I have a lot to learn.

One of the key points of his talk is that developers need to be familiar with the algorithm library,
and be able to extend it. During his talk, he magically transformed a few dozen lines of complex code into two, using <algorithm>.

I’m used to seeing Python code refactorings that turn huge pieces of code into a few lines which are easy to read and reason about. But in C++? Wow.

Anyway, one of my takeaways is that I don’t use <algorithm> enough. It’s time to change that.
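
To give a trivial sketch of the kind of rewrite Sean advocates (my own toy example, not his slide code):

#include <algorithm>
#include <string>
#include <vector>

int main() {
  const std::vector<std::string> names = {"alice", "bob", "carol"};

  // the raw loop I'd have written before:
  // bool found = false;
  // for (std::size_t i = 0; i < names.size(); ++i)
  //   if (names[i] == "bob") { found = true; break; }

  // the <algorithm> version states the intent directly:
  const bool found =
      std::find(names.begin(), names.end(), "bob") != names.end();
  return found ? 0 : 1;
}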

The <algorithm> series

I’ll go through every algorithm in [ C++’s stl, Adobe’s asl, Google’s abseil & Boost ] and provide examples for each. I might even throw in some Introduction to Algorithms references.

This is a big project which will take a very (very) long time to finish, but it’s worth it.

Stay tuned.

Everything that is wrong with sudo, and how I'm planning to fix it

The last few weeks have been a bit crazy.

I found myself re-implementing gosu because it lacked features I needed (see sick of sudoers NOPASSWD?).

Even when I was done, I felt something was missing. runas, the tool I wrote, was halfway to becoming a sudo replacement, and it bugged me that I’d stopped midway.

I looked around the web and found an amazing project called doas that is basically runas on steroids.

What is doas

doas is a utility that aims to replace sudo for most ordinary use cases.
Ted Unangst, an OpenBSD developer, explained why he originally wrote it in his blog post: doas - dedicated openbsd application subexecutor.

The gist is that sudo is hard to configure and does a lot more than the standard user needs. doas was created to replace sudo for regular folks like me and you.
Moreover, sudo lacks ‘blacklist’ behaviour, which is extremely useful at times.

doas is relatively easy to configure, and an absolute joy compared to sudo. It’s also powerful enough for most daily use-cases.
IMO, the permit / deny concept of doas is so powerful that it’s enough to make the switch.

Implementing doas from scratch

The problem was that doas was written for OpenBSD.
I’m not running OpenBSD, so I looked around for a port.

All ports I found were half baked and poorly written.

Then I looked at the original source code, and decided I’m not going to port it.

Why? Because it’s written in C, and I really don’t want to maintain C code.
Furthermore, the original code base lacked features I introduced in runas which I really loved.

Instead, I decided to start this project. A complete re-implementation of doas.

This is my first attempt at writing a production quality, open source project from scratch.

I’m not there yet, but I’m determined to push this project into the main repositories of both Ubuntu and Fedora. More work has to be done in order to get there, for instance: adding system tests & getting the code audited.

Feel free to reach out if you want to contribute!

Project Goals

  • Secure. Users shouldn’t be able to abuse the utility, and it should protect the user from making stupid mistakes.

  • Easy. The utility should be easy to audit, to maintain, to extend and to contribute to.

  • Friendly. Rule creation should be straightforward. Rules should be easy to understand and easy to debug.

  • Powerful. Rules should be short, concise and allow fine-grained control.

  • Feature Parity. This project should have complete feature parity with the original utility.

To achieve these goals, the following design decisions were made:

  1. The whole project was implemented in modern C++.
  2. Explicit is better than implicit (for instance, rule commands must be absolute paths)
  3. Prefer using the standard library when possible - for the sake of security and maintainability.
  4. Commands are globs, which allows using the same rule for many executables.
  5. Arguments are PCRE-compliant regular expressions, which allows creating fine-grained rules.

Getting started

You can find pre-compiled .deb and .rpm packages in the project’s GitHub Releases Page.

[!] Ubuntu PPA & Fedora Copr are coming soon.

You can also build from source; more information can be found at odedlaz/suex.

Changes compared to the original

Security checks

doas doesn’t check the owners & permissions of the binary and configuration file.
sudo checks those, but only warns the user.

This version ensures the binary and configuration file are owned by root:root.
It also ensures the binary has setuid, and that the configuration file has only read permissions.

Furthermore, only full paths of commands are allowed in the configuration file.
The idea is that privileged users (i.e. members of the wheel group) need to set the rule explicitly instead of depending on the running user’s PATH.
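
As a rough illustration (a sketch, not suex’s actual code, and the path below is hypothetical), the ownership and setuid checks described above boil down to a couple of stat(2) calls:

#include <sys/stat.h>

// the file must be owned by root:root
bool root_owned(const char *path) {
  struct stat st{};
  return stat(path, &st) == 0 && st.st_uid == 0 && st.st_gid == 0;
}

// the binary must have the setuid bit set
bool is_setuid(const char *path) {
  struct stat st{};
  return stat(path, &st) == 0 && (st.st_mode & S_ISUID) != 0;
}

int main() {
  // hypothetical install path, for illustration only
  const char *bin = "/usr/bin/suex";
  return root_owned(bin) && is_setuid(bin) ? 0 : 1;
}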

Edit mode

suex -E

suex allows any privileged user (i.e. a member of the wheel group) to edit the configuration file safely.
Furthermore, if the configuration file is corrupted, privileged users can still access it and edit it.

The edit option is similar to visudo: it creates a copy of the configuration and updates the real configuration only when the copy is valid.

Non-privileged users are not allowed to edit the configuration.

Verbose mode

suex -V

suex can show logging information to privileged users. That information shows which rules are being loaded & how they are processed.

Non-privileged users are not allowed to turn on verbose mode.

Dump mode

suex -D

suex allows the user to dump the permissions it loaded to the screen.
Group permissions and command globs are expanded into individual rules as well.

Privileged users see the permissions of all users instead of only their own.

Examples

Ted Unangst wrote a great blog post called doas mastery. Because this project has complete feature parity with the OpenBSD version, the mentioned post should be a good starting point.

Nevertheless, there are some powerful enhancements in this release that deserve special attention.

fine-grained package management

deny odedlaz as root cmd /usr/bin/dnf args (autoremove|update|upgrade).+
permit keepenv nopass odedlaz as root cmd /usr/bin/dnf args (autoremove|update|upgrade)$

The first rule denies odedlaz from running dnf as root with any arguments that start with autoremove, update or upgrade and are followed by anything else.

The second rule allows odedlaz to run dnf as root only with autoremove, update, upgrade and no other arguments.

These rules protect odedlaz from accidentally running dnf autoremove -y or dnf upgrade -y, even if he’s a privileged user (a member of the wheel group).

On the other hand, it allows odedlaz to run these commands without a password (nopass) if they are executed without any trailing arguments.

rm -rf protection

deny odedlaz as root cmd /bin/rm args .*\s+/$

The above rule protects odedlaz from accidentally running rm -rf / and the like.

one rule, multiple executables

permit keepenv nopass odedlaz as root cmd /home/odedlaz/Development/suex/tools/* args .*

The above rule allows odedlaz to run any executable found at /home/odedlaz/Development/suex/tools with any arguments, as root without requiring a password.

Implementing Go's defer keyword in C++

Go has a neat keyword called defer that is used to ensure that a function call is performed later in a program’s execution, usually for purposes of cleanup.

Suppose we wanted to create a file, write to it, and then close it when we’re done:

package main

import "fmt"
import "os"

func createFile(p string) *os.File {
    fmt.Println("creating")
    f, err := os.Create(p)
    if err != nil {
        panic(err)
    }
    return f
}

func writeFile(f *os.File) {
    fmt.Println("writing")
    fmt.Fprintln(f, "data")
}

func closeFile(f *os.File) {
    fmt.Println("closing")
    f.Close()
}

func main() {
    f := createFile("/tmp/defer.txt")
    defer closeFile(f)
    writeFile(f)
}

Immediately after getting a file object with createFile, we defer the closing of that file with closeFile. This will be executed at the end of the enclosing function (main), after writeFile has finished.

Running the program confirms that the file is closed after being written:

$ go run defer.go
creating
writing
closing

[!] The above was taken from Go by Example

Implementing defer in C++

C++ has a neat feature called Resource Acquisition Is Initialization, a.k.a RAII. There are a lot of resources online that explain what RAII is and how it works, Tom Dalling’s for example.

One of the top uses for RAII is scope guards, which are usually used to perform cleanup. The concept is explained thoroughly in Generic: Change the Way You Write Exception-Safe Code — Forever.

I didn’t like the implementation they suggested, and instead went searching for a better one. I found what I was looking for on stackoverflow:

class ScopeGuard {
 public:
  template <class Callable>
  ScopeGuard(Callable &&fn) : fn_(std::forward<Callable>(fn)) {}

  ScopeGuard(ScopeGuard &&other) : fn_(std::move(other.fn_)) {
    other.fn_ = nullptr;
  }

  ~ScopeGuard() {
    // must not throw
    if (fn_) fn_();
  }

  ScopeGuard(const ScopeGuard &) = delete;
  void operator=(const ScopeGuard &) = delete;

 private:
  std::function<void()> fn_;
};

which can be used as follows:

std::cout << "creating" << std::endl;
std::ofstream f("/path/to/file");
ScopeGuard close_file = [&]() { std::cout << "closing" << std::endl;
f.close(); };
std::cout << "writing" << std::endl;
f << "hello defer" << std::endl;

The above execution flow would be: creating -> writing -> closing.
Nice, right? But it also forces us to name each ScopeGuard, which is annoying.

Thank god we have macros! (never say that. same for goto) -

#define CONCAT_(a, b) a ## b
#define CONCAT(a, b) CONCAT_(a,b)
#define DEFER(fn) ScopeGuard CONCAT(__defer__, __LINE__) = fn
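
Why the extra CONCAT_ indirection? Arguments of a macro that uses ## aren’t macro-expanded first, so the extra level forces __LINE__ to become a number before the paste. Roughly:

// written on (say) line 42:
//   DEFER ( [&]() { f.close(); } );
// after preprocessing this becomes:
//   ScopeGuard __defer__42 = [&]() { f.close(); };
// without going through CONCAT_, every call would produce the
// same name, __defer____LINE__, and two DEFERs in one scope
// wouldn't compile.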

And now we have defer-like behaviour in C++:

std::cout << "creating" << std::endl;
std::ofstream f("/path/to/file");
DEFER([&]() {
  std::cout << "closing" << std::endl;
  f.close();
});
std::cout << "writing" << std::endl;
f << "hello defer" << std::endl;

But why do we need the excess [&]() { ... ; } part? And what is it anyway?
[&] tells the compiler to capture all local variables by reference, and () declares the lambda’s (empty) parameter list.
We want this behaviour for all DEFER calls, so let’s put it in the macro:

#define DEFER(fn) ScopeGuard CONCAT(__defer__, __LINE__) = [&] ( ) { fn ; }

And now there’s no need for boilerplate code:

std::ofstream f("/path/to/file");
DEFER ( f.close() );
f << "hello defer" << std::endl;

The neat part is that we can call DEFER multiple times without having to name variables,
because each DEFER call creates a ScopeGuard whose name embeds the current line number, avoiding collisions:

std::ofstream f1("/path/to/file1");
DEFER ( f1.close() );
f1 << "hello defer" << std::endl;
std::ofstream f2("/path/to/file2");
DEFER ( f2.close() );
f2 << "hello defer" << std::endl;

It also works with multiline functions, just like golang’s defer keyword:

std::ofstream f("/path/to/file1");
DEFER ( { std::cout << "closing file" << std::endl;
f.close(); } );
f << "hello defer" << std::endl;
// curly-braces and trailing comma's are not mandatory.
// the previous statement could've been written like this too:
DEFER ( std::cout << "closing file" << std::endl;
f.close() );

sick of sudoers NOPASSWD?

TL;DR: I wrote a tool that lets you run a binary as a different owner/group.

You can download it from odedlaz/runas.

Feel free to request features, send pull requests & open issues!

Motivation

You must be thinking that I’m re-inventing the wheel. Well, I’m not. Let’s look at the following scenario:

  • There’s a binary that you want to run in a non-interactive session.
  • You want the binary to run with different permissions than the current user.
  • You don’t want the user to be able to run any binary with any permissions,
    only the one you want, with the requested user / group.
  • You don’t want a child process to get created, because you want to run the binary
    as part of a filter without any other processes getting in the way.

A good example would be debugging an elevated app while running your editor regularly - for example, running gdb and debugging a binary as root.

You probably don’t want to turn on Set owner User ID (setuid) because that’s a major security hole.
You also can’t use su / sudo as part of your editor / IDE because they execute the target process as a child, which causes many issues.

sudo is also somewhat complex to configure, and honestly, I prefer to avoid using it altogether.

Solution

A tool that is easy to configure & runs the target binary with the requested owner:group.
runas is that tool. It does one thing, and (hopefully) does it well.

runas doesn’t have any complicated flags or knobs.

$ runas
Usage: bin/runas user-spec command [args]
version: 0.1.2, license: MIT

It just lets you run binaries:

$ runas root:root bash -c 'whoami && id'
You can't execute '/bin/bash -c whoami && id' as 'root:root': Operation not permitted

But you need the proper permissions to do so.

$ echo "odedlaz -> root :: /usr/bin/bash -c 'whoami && id'" | sudo tee --append /etc/runas.conf
[sudo] password for odedlaz:
odedlaz -> root :: /usr/bin/bash

Notice I added /usr/bin/bash which is linked to /bin/bash.
runas follows links to their source, to make sure the right binary is called.
It also mimics the way shells parse commands so the configuration and command should be identical.

For instance, 'whoami && id' is concatenated by the shell into one argument.
runas makes sure you don’t have to think about the way things get parsed.

Anyway, now the command works:

$ runas root:root bash -c 'whoami && id'
root
uid=0(root) gid=0(root) groups=0(root)

“Advanced” Examples

What if you want to allow the user to use any argument for a given binary?
The previous configuration only allows us to run bash -c 'whoami && id'.

$ runas root:root bash -c id
You can't execute '/bin/bash -c id' as 'root:root': Operation not permitted

You don’t need to think too much. The configuration is really easy:

$ echo "odedlaz -> root :: /usr/bin/bash" | sudo tee --append /etc/runas.conf
[sudo] password for odedlaz:
odedlaz -> root :: /usr/bin/bash

And now any argument passed to bash will work, including the previous one:

$ runas root:root bash -c id
uid=0(root) gid=0(root) groups=0(root)

You can also lock the user to run bash -c commands exclusively:

$ echo 'odedlaz -> root :: /usr/bin/bash -c .*' | sudo tee --append /etc/runas.conf
[sudo] password for odedlaz:
odedlaz -> root :: /usr/bin/bash -c .*

And now the user can run any argument that begins with -c.
If we removed the previous rule, we wouldn’t be able to run bash without -c:

$ runas root:root bash -c id
uid=0(root) gid=0(root) groups=0(root)
$ runas root:root bash
You can't execute '/bin/bash' as 'root:root': Operation not permitted

runas is greedy: it’ll try to find a configuration that allows running the given command, and will stop once it finds one.

Group permissions

What if you want to allow specific group members to run a command? Again, you don’t need to think too much:

$ echo "%docker -> root :: /bin/systemctl restart docker" | sudo tee --append /etc/runas.conf
[sudo] password for odedlaz:
%docker -> root :: /bin/systemctl restart docker

And now any member of the docker group can restart the docker daemon!

Fine-grained permissions

runas uses C++14, which comes with a built-in ECMAScript-flavored regex library.
Regular expressions can be really helpful when you want a lot of control over the given permissions while keeping them easy to understand.
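
As a rough illustration (my own sketch, not runas’s actual code), here’s how std::regex’s default ECMAScript grammar matches an args pattern like the one in the example below:

#include <iostream>
#include <regex>

int main() {
  // args pattern like the systemd rule shown below
  const std::regex args_re("(start|stop|restart|cat) .*");
  std::cout << std::boolalpha
            << std::regex_match("cat docker", args_re) << '\n'    // true
            << std::regex_match("mask docker", args_re) << '\n';  // false
}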

A good example would be to allow the user to run a specific set of operations on systemd units:

$ echo "odedlaz -> root :: /bin/systemctl (start|stop|restart|cat) .*" | sudo tee --append /etc/runas.conf
[sudo] password for odedlaz:
odedlaz -> root :: /bin/systemctl (start|stop|restart|cat) .*

Now the user doesn’t need root permissions to perform start, stop, restart and cat operations:

$ runas root systemctl cat docker
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target docker.socket firewalld.service
Wants=network-online.target
Requires=docker.socket
...

Why reinvent gosu?

gosu is a tool that was invented to solve TTY & signaling issues, mainly for containers.
As I said before, sudo and su run the target process as a child, which means all signals are passed to them, and sometimes aren’t forwarded properly.
gosu solves that issue, but doesn’t provide a permissions mechanism, which makes it practically impossible to use on regular systems that need an extra layer of security.
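
For illustration, here’s a minimal sketch (not runas’s or gosu’s actual code) of the exec-in-place approach: drop to the requested ids, then replace the current process image, so no child process is created and signals reach the target directly:

#include <unistd.h>
#include <cstdio>

int main() {
  // drop to the requested group and user (e.g. root:root);
  // setgid must come before setuid
  if (setgid(0) != 0 || setuid(0) != 0) {
    std::perror("setgid/setuid");
    return 1;
  }
  // replace the current process image with the target binary
  char *const argv[] = {const_cast<char *>("id"), nullptr};
  execvp("id", argv);  // only returns on failure
  std::perror("execvp");
  return 1;
}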

gosu is also written in Go, which is notorious for producing really big binaries:

  • 1.23MB for the amd64 release
  • 1.1MB for the i386 release

runas’s binary takes only 200KB unpacked, and ~60KB when packed with UPX.

GNOME Shell Modes

A few days ago I installed PulseSecure’s client to gain access to the corporate VPN.
For reasons completely unknown to me, these guys are stuck in the past, providing only a 32-bit client for Linux. WTF.

Anyway, I got everything working with OpenConnect, so I could safely remove all the 32-bit dependencies I had added to my system.

How? I ran dpkg --remove-architecture i386 && apt-get purge ".*:i386". BIG. BIG MISTAKE.

TL;DR: many parts of the system broke, but I got everything up and running after an hour or so.

There were only two things that stayed broken:

  1. The date/time clock, which is usually centered at the top bar, moved to the right.
  2. I had a window list bar stuck at the bottom of the screen.

I couldn’t find solutions to either. It seemed that everyone online was trying to move the clock to the right, not to the middle.
Moreover, every time I tried to disable the Window List extension, it came back.

While trying to remove the extension, I found a small configuration file at /usr/share/gnome-shell/modes called classic.json, with the following content:

{
  "parentMode": "user",
  "stylesheetName": "gnome-classic.css",
  "enabledExtensions": ["window-list@gnome-shell-extensions.gcampax.github.com"],
  "panel": {
    "left": ["activities", "appMenu"],
    "center": [],
    "right": ["a11y", "dateMenu", "keyboard", "aggregateMenu"]
  }
}

Neat! All I had to do was move the “dateMenu” item to center, and remove “window-list”.

P.S: also answered on askubuntu.com

Algorithms to Live By

A few weeks ago I started reading Algorithms to Live By: The Computer Science of Human Decisions, and I’ve been fascinated by it ever since:

What should we do, or leave undone, in a day or a lifetime? How much messiness should we accept? What balance of the new and familiar is the most fulfilling? These may seem like uniquely human quandaries, but they are not. Computers, like us, confront limited space and time, so computer scientists have been grappling with similar problems for decades. And the solutions they’ve found have much to teach us.

In a dazzlingly interdisciplinary work, Brian Christian and Tom Griffiths show how algorithms developed for computers also untangle very human questions. They explain how to have better hunches and when to leave things to chance, how to deal with overwhelming choices and how best to connect with others. From finding a spouse to finding a parking spot, from organizing one’s inbox to peering into the future, Algorithms to Live By transforms the wisdom of computer science into strategies for human living.

I’ll try to give you a little taste of the book - which I’m still reading (more precisely, listening to) - so you’ll know what you’re getting into.

Read More

Making Hexo Blazing Fast

A week ago I migrated my blog from Ghost to Hexo to gain better performance and save money.

Hexo is said to be “Blazing Fast”, but while I did “feel” that my Hexo based site was snappier than its predecessor, it was far from “Blazing Fast”.

Performance is extremely important. There are a lot of articles on the subject, most of which point out that website performance & uptime are key to user satisfaction. WebpageFX wrote a nice summary of the subject - Why Website Speed is Important.

I’m not a web developer, and have almost zero knowledge of website optimizations. Nonetheless, I’ve optimized more than a few apps in my career and know how to approach such problems.

All I need is to figure out the tooling to find the bottlenecks and fix them to gain good enough performance. That is, I’m not looking into optimizing every single piece of the website, only making it fast enough so it’ll feel snappy.

This blog post explains the steps I took in order to dramatically decrease the average page size to less than 350KB.

Read More