mirror of
https://github.com/cirosantilli/linux-kernel-module-cheat.git
synced 2026-01-23 02:05:57 +01:00
download-dependencies: merge into ./build --download-dependencies
Reuses the module system dependencies present there.

run: make --dry-run work even when there is no out directory yet

docker: make the wrapping more intuitive
.travis.yml (12 changed lines)
@@ -2,20 +2,10 @@ language: cpp

sudo: required

install: |
  cd "$TRAVIS_BUILD_DIR"
  bash -x ./download-dependencies --travis

script: |
  cd "$TRAVIS_BUILD_DIR"
  # --nproc: I'm unable to install nproc on Travis.
  # TODO why? Is part of coreutils in Ubuntu 16.04:
  # http://manpages.ubuntu.com/manpages/trusty/man1/nproc.1.html
  # which ./download-dependencies is installing.
  #
  # awk: without it, too much stdout (4Mb max)
  # If we ignore stdout: Travis kills job because it spent
  # too long without any new stdout.
  bash -x ./build-qemu --nproc 16 |& awk 'NR % 1000 == 0'
  bash -x ./build-buildroot --nproc 16 |& awk 'NR % 1000 == 0'
  bash -x ./build --download-dependencies --travis |& awk 'NR % 1000 == 0'
  bash -x ./run --kernel-cli 'init=/poweroff.out'

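The `awk 'NR % 1000 == 0'` filter above keeps only every 1000th line of build output, which is how the job stays under Travis' 4 MB log limit while still emitting output often enough not to be killed as stalled. A minimal Python sketch of the same idea; the `throttle` helper and its 1000-line period are illustrative, not part of the repo:

....
import sys

def throttle(lines, period=1000):
    """Yield only every `period`-th line, mirroring awk 'NR % 1000 == 0'."""
    for line_number, line in enumerate(lines, start=1):
        if line_number % period == 0:
            yield line

if __name__ == '__main__':
    # Usage sketch: some_build_command |& python3 throttle.py
    for line in throttle(sys.stdin):
        sys.stdout.write(line)
        sys.stdout.flush()
....
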
README.adoc (130 changed lines)
@@ -32,7 +32,7 @@ Reserve 12Gb of disk and run:
....
git clone https://github.com/cirosantilli/linux-kernel-module-cheat
cd linux-kernel-module-cheat
./download-dependencies && ./build
./build --download-dependencies
./run
....

@@ -331,7 +331,7 @@ For the most part, if you just add the `--gem5` option or `*-gem5` suffix to all
If you haven't built Buildroot yet for <<qemu-buildroot-setup>>, you can build from the beginning with:

....
./download-dependencies --gem5 && ./build gem5-buildroot
./build --download-dependencies gem5-buildroot
./run --gem5
....

@@ -387,58 +387,55 @@ Good next steps are:

This repository has been tested inside clean link:https://en.wikipedia.org/wiki/Docker_(software)[Docker] containers.

This is a good option if you are on a Linux host, but the native setup failed due to your weird host distribution, and you have better things to do with your life than to debug it.
This is a good option if you are on a Linux host, but the native setup failed due to your weird host distribution, and you have better things to do with your life than to debug it. See also: <<supported-hosts>>.

Buildroot is the most complex thing we build, and therefore the most likely to break, so running inside Docker is especially relevant to run:

* <<qemu-buildroot-setup>>
* <<gem5-buildroot-setup>>

Before anything, you must get rid of any host build files on `out/` if you have any. A simple way to do this is to:
For example, to do a <<qemu-buildroot-setup>> inside Docker, run:

....
mv out out.host
sudo apt-get install docker && \
./run-docker create && \
./run-docker start && \
./run-docker sh ./build --download-dependencies && \
./run-docker sh
....

A cleaner option is to make a separate clone of this repository just for Docker, although this will require another submodule update.

Then install Docker, e.g. on Ubuntu:
You are now left inside a shell in the Docker container! From there, just run as usual:

....
sudo apt-get install docker
./run
....

The very first time you launch Docker, create the container with:
Command breakdown:

* `./run-docker create`: create the container.
+
Needed only the very first time you use Docker, or if you run `./run-docker DESTROY` to restart from scratch, or to save some disk space.
+
The container name is `lkmc` and shows up in the list of all containers:
+
....
./run-docker setup
docker ps -a
....
* `./run-docker start`: start the container as a daemon in the background.
+
Needed only after reboot, or if you call `stop` to save CPU or memory resources.
+
The container can now be seen on the list of running containers:
+
....
docker ps
....
* `./run-docker sh`: open a shell on a previously started Docker daemon.
+
Quit the shell as usual with `Ctrl-D`.
+
Can be called multiple times to open multiple shells.

You are now left inside a shell in the Docker guest.

From there, run the exact same commands that you would on a native install.

The host git top level directory is mounted inside the guest, which means for example that you can use your host's GUI text editor directly on the files.

Just don't forget that if you nuke that directory on the guest, then it gets nuked on the host as well!

Trying to run the output from Docker on the host won't work however; I think the main reason is that the absolute paths inside Docker differ from the host ones, but even if we fixed that there would likely be other problems.
The host git top level directory is mounted inside the guest with a link:https://stackoverflow.com/questions/23439126/how-to-mount-a-host-directory-in-a-docker-container[Docker volume], which means for example that you can use your host's GUI text editor directly on the files. Just don't forget that if you nuke that directory on the guest, then it gets nuked on the host as well!

TODO make files created inside Docker be owned by the current user on the host instead of `root`: https://stackoverflow.com/questions/23544282/what-is-the-best-way-to-manage-permissions-for-docker-shared-volumes

Quit and stop the container:

....
Ctrl-D
....

Restart the container:

....
./run-docker
....

In order to use functionality such as <<gdb>>, you need a second shell inside the container. You can either do that with:
In order to use functionality such as <<gdb>> from inside Docker, you need a second shell inside the container. You can either do that from another shell with:

....
./run-docker sh
@@ -454,13 +451,7 @@ You can start a second shell and run a command in it at the same time with:

Docker stops if and only if you quit the initial shell; you can quit this one without consequences.

If you mistakenly run `./run-docker` twice, it opens two mirrored terminals. To quit one of them do link:https://stackoverflow.com/questions/19688314/how-do-you-attach-and-detach-from-dockers-process[]:

....
Ctrl-P Ctrl-Q
....

To use <<qemu-graphic-mode>> from Docker:
To use <<qemu-graphic-mode>> from Docker, run:

....
./run --graphic --vnc
@@ -473,29 +464,20 @@ sudo apt-get install vinagre
./vnc
....

Destroy the docker container:
When you do:

....
./run-docker DELETE
./run-docker DESTROY
....

Since we mount the guest's working directory on the host git top-level, you will likely not lose data from doing this, just the `apt-get` installs.
you don't really destroy the build, since we mount the guest's working directory on the host git top-level, so you basically just get rid of the `apt-get` installs.

To get back to a host build, don't forget to clean up `out/` again:
To actually delete the Docker build, run:

....
mv out out.docker
mv out.host out
# sudo rm -rf out.docker
....

After this, to start using Docker again you will need another:

....
./run-docker setup
....

Tested on: a760cb1196161e913a94684e03cfeaebf71f0cdd

[[prebuilt]]
=== Prebuilt Buildroot setup

@@ -735,8 +717,7 @@ Our C bare-metal compiler is built with link:https://github.com/crosstool-ng/cro
QEMU:

....
./download-dependencies --baremetal --qemu && \
./build --arch arm qemu-baremetal
./build --arch arm --download-dependencies qemu-baremetal
./run --arch arm --baremetal interactive/prompt
....

@@ -801,8 +782,7 @@ Absolute paths however are used as is and must point to the actual executable:
To use gem5 instead of QEMU do:

....
./download-dependencies --baremetal --gem5 && \
./build gem5-baremetal
./build --download-dependencies gem5-baremetal
./run --arch arm --baremetal interactive/prompt --gem5
....

@@ -8181,7 +8161,7 @@ less "$(./getvar -a "$arch" run_dir)/trace.txt"

This functionality relies on the following setup:

* `./download-dependencies --enable-trace-backends=simple`. This logs in a binary format to the trace file.
* `./configure --enable-trace-backends=simple`. This logs in a binary format to the trace file.
+
It makes execution 3x faster than the default trace backend, which logs human readable data to stdout.
+
@@ -8923,7 +8903,7 @@ There are two ways to run PARSEC with this repo:
====== PARSEC benchmark without parsecmgmt

....
./download-dependencies --gem5 --parsec-benchmark
./build --arch arm --download-dependencies gem5-buildroot parsec-benchmark
./build-buildroot --arch arm --config 'BR2_PACKAGE_PARSEC_BENCHMARK=y'
./run --arch arm --gem5
....
@@ -10348,7 +10328,7 @@ CT_GDB_CROSS_SIM=y
which by grepping crosstool-NG we can see does the following on GDB:

....
./download-dependencies --enable-sim
./configure --enable-sim
....

Those are not set by default on `gdb-multiarch` in Ubuntu 16.04.
@@ -10647,7 +10627,7 @@ Kernel panic - not syncing: Attempted to kill the idle task!

==== Benchmark builds

The build times are calculated after doing `./download-dependencies` and link:https://buildroot.org/downloads/manual/manual.html#_offline_builds[`make source`], which downloads the sources, and basically benchmarks the <<benchmark-internets,Internet>>.
The build times are calculated after doing `./configure` and link:https://buildroot.org/downloads/manual/manual.html#_offline_builds[`make source`], which downloads the sources, and basically benchmarks the <<benchmark-internets,Internet>>.

Sample build time at 2c12b21b304178a81c9912817b782ead0286d282: 28 minutes, 15 with full ccache hits. Breakdown: 19% GCC, 13% Linux kernel, 7% uclibc, 6% host-python, 5% host-qemu, 5% host-gdb, 2% host-binutils

@@ -10801,7 +10781,15 @@ gem5:

We tend to test this repo the most on the latest Ubuntu and on the latest link:https://askubuntu.com/questions/16366/whats-the-difference-between-a-long-term-support-release-and-a-normal-release[Ubuntu LTS].

For other Linux distros, everything will likely also just work if you install the analogous required packages for your distro, just have a look at: link:download-dependencies[]. `./download-dependencies` ports to new systems are welcome and will be merged.
For other Linux distros, everything will likely also just work if you install the analogous required packages for your distro, find them out with:

....
./build --download-dependencies --dry-run
....

which quickly prints what `build` would do without actually doing anything.

Ports to new host systems are welcome and will be merged.

If something does not work however, <<docker>> should just work on any Linux distro.

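`common.run_cmd` itself is not shown in this diff; below is a minimal sketch of the dry-run idea it appears to implement, where `--dry-run` only prints the command that would be executed. The function body and details here are assumptions for illustration, not the repo's actual implementation:

....
import shlex
import subprocess

def run_cmd(cmd, dry_run=False):
    # Print the command in a copy-pastable form; only execute it when not
    # doing a dry run, so --dry-run can list the steps without side effects.
    print(' '.join(shlex.quote(arg) for arg in cmd))
    if not dry_run:
        subprocess.check_call(cmd)

# Example: prints the command, but does not actually run apt-get.
run_cmd(['sudo', 'apt-get', 'install', 'gcc-aarch64-linux-gnu'], dry_run=True)
....
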
@@ -10811,7 +10799,7 @@ Native Windows is unlikely feasible because Buildroot is a huge set of GNU Make

==== You must put some 'source' URIs in your sources.list

If `./download-dependencies` fails with:
If `./build --download-dependencies` fails with:

....
E: You must put some 'source' URIs in your sources.list
@@ -10823,7 +10811,7 @@ see this: https://askubuntu.com/questions/496549/error-you-must-put-some-source-

It does not work if you just download the `.zip` with the sources for this repository from GitHub because we use link:.gitmodules[Git submodules]; you must clone this repo.

`./download-dependencies` then fetches only the required submodules for you.
`./build --download-dependencies` then fetches only the required submodules for you.

=== Run command after boot

@@ -11072,7 +11060,7 @@ git -C "$(./getvar linux_src_dir)" checkout -
./run --linux-build-id v4.16
....

The `git fetch --unshallow` is needed the first time because link:download-dependencies[] only does a shallow clone of the Linux kernel to save space and time, see also: https://stackoverflow.com/questions/6802145/how-to-convert-a-git-shallow-clone-to-a-full-clone
The `git fetch --unshallow` is needed the first time because `./build --download-dependencies` only does a shallow clone of the Linux kernel to save space and time, see also: https://stackoverflow.com/questions/6802145/how-to-convert-a-git-shallow-clone-to-a-full-clone

The `--linux-build-id` option should be passed to all scripts that support it, much like `--arch` for the <<cpu-architecture>>, e.g. to step debug:

build (220 changed lines)
@@ -2,6 +2,7 @@

import argparse
import collections
import re
import os

import common
@@ -19,15 +20,24 @@ class Component:
    def __init__(
        self,
        build_callback=None,
        dependencies=None,
        supported_archs=None,
        dependencies=None,
        apt_get_pkgs=None,
        apt_build_deps=None,
        submodules=None,
        submodules_shallow=None,
        python2_pkgs=None,
        python3_pkgs=None,
    ):
        self.build_callback = build_callback
        self.supported_archs = supported_archs
        if dependencies is None:
            self.dependencies = []
        else:
            self.dependencies = dependencies
        self.dependencies = dependencies or set()
        self.apt_get_pkgs = apt_get_pkgs or set()
        self.apt_build_deps = apt_build_deps or set()
        self.submodules = submodules or set()
        self.submodules_shallow = submodules_shallow or set()
        self.python2_pkgs = python2_pkgs or set()
        self.python3_pkgs = python3_pkgs or set()
    def build(self, arch):
        if (
            (self.build_callback is not None) and
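The constructor above replaces the `if dependencies is None:` branching with the `x or set()` idiom. The reason for defaulting to `None` rather than to a set literal is Python's shared mutable default arguments; a minimal illustrative sketch, where the `Example` class is hypothetical and not part of the repo:

....
# A mutable default like `def __init__(self, deps=set())` would be shared by
# every instance that does not pass its own value. Defaulting to None and then
# doing `deps or set()` gives each instance a fresh set instead (it also swaps
# an explicitly passed empty container for a new one, which is harmless here).
class Example:
    def __init__(self, deps=None):
        self.deps = deps or set()

a = Example()
b = Example()
a.deps.add('qemu')
assert b.deps == set()   # not shared, unlike a mutable default argument
....
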
@@ -44,6 +54,11 @@ def run_cmd(cmd, arch):
    cmd_abs.append(args.extra_args)
    common.run_cmd(cmd_abs, dry_run=args.dry_run)

buildroot_component = Component(
    lambda arch: run_cmd(['build-buildroot'], arch),
    submodules = {'buildroot'},
)

name_to_component_map = {
    # Leaves without dependencies.
    'baremetal-qemu': Component(
@@ -58,21 +73,47 @@ name_to_component_map = {
        lambda arch: run_cmd(['build-baremetal', '--gem5', '--machine', 'RealViewPBX'], arch),
        supported_archs=common.crosstool_ng_supported_archs,
    ),
    'buildroot': Component(
        lambda arch: run_cmd(['build-buildroot'], arch),
    ),
    'buildroot-gcc': Component(
        lambda arch: run_cmd(['build-buildroot'], arch),
    ),
    'buildroot': buildroot_component,
    'buildroot-gcc': buildroot_component,
    'copy-overlay': Component(
        lambda arch: run_cmd(['copy-overlay'], arch),
    ),
    'crosstool-ng': Component(
        lambda arch: run_cmd(['build-crosstool-ng'], arch),
        supported_archs=common.crosstool_ng_supported_archs,
        # http://crosstool-ng.github.io/docs/os-setup/
        apt_get_pkgs={
            'bison',
            'docbook2x',
            'flex',
            'gcc',
            'gperf',
            'help2man',
            'libncurses5-dev',
            'libtool-bin',
            'make',
            'python-dev',
            'texinfo',
        },
        submodules={'crosstool-ng'},
    ),
    'gem5': Component(
        lambda arch: run_cmd(['build-gem5'], arch),
        # TODO test it out on Docker and answer that question properly:
        # https://askubuntu.com/questions/350475/how-can-i-install-gem5
        apt_get_pkgs={
            'diod',
            'libgoogle-perftools-dev',
            'protobuf-compiler',
            'python-dev',
            'python-pip',
            'scons',
        },
        python2_pkgs={
            # Generate graphs of config.ini under m5out.
            'pydot',
        },
        submodules={'gem5'},
    ),
    'gem5-debug': Component(
        lambda arch: run_cmd(['build-gem5', '--gem5-build-type', 'debug'], arch),
@@ -82,18 +123,29 @@ name_to_component_map = {
    ),
    'linux': Component(
        lambda arch: run_cmd(['build-linux'], arch),
        submodules_shallow={'linux'},
    ),
    'modules': Component(
        lambda arch: run_cmd(['build-modules'], arch),
    ),
    'm5': Component(
        lambda arch: run_cmd(['build-m5'], arch),
        submodules={'gem5'},
    ),
    'qemu': Component(
        lambda arch: run_cmd(['build-qemu'], arch),
        apt_build_deps={'qemu'},
        apt_get_pkgs={'libsdl2-dev'},
        submodules={'qemu'},
    ),
    'qemu-user': Component(
        lambda arch: run_cmd(['build-qemu', '--userland'], arch),
        apt_build_deps = {'qemu'},
        apt_get_pkgs={'libsdl2-dev'},
        submodules = {'qemu'},
    ),
    'parsec-benchmark': Component(
        submodules = {'parsec-benchmark'},
    ),
    'userland': Component(
        lambda arch: run_cmd(['build-userland'], arch),
@@ -212,10 +264,18 @@ group.add_argument('-a', '--arch', choices=common.arch_choices, default=[], acti
Build the selected components for this arch. Select multiple archs by
passing this option multiple times. Default: [{}]
'''.format(common.default_arch))
parser.add_argument('-D', '--download-dependencies', default=False, action='store_true', help='''\
Also download all dependencies required for a given build: Ubuntu packages,
Python packages and git submodules.
''')
parser.add_argument('--extra-args', default='', help='''\
Extra args to pass to all scripts.
'''
)
parser.add_argument('--travis', default=False, action='store_true', help='''\
Tweak the build for Travis CI: skip packages that need an interactive display
and pass -y to apt-get.
'''
)
parser.add_argument('components', choices=list(name_to_component_map.keys()) + [[]], default=[], nargs='*', help='''\
Which components to build.
'''.format(common.default_arch))
@@ -246,7 +306,7 @@ selected_components = []
selected_component_name_set = set()
for component_name in components:
    todo = [component_name]
    while todo != []:
    while todo:
        current_name = todo.pop(0)
        if current_name not in selected_component_name_set:
            selected_component_name_set.add(current_name)
@@ -254,6 +314,142 @@ for component_name in components:
            selected_components.append(component)
            todo.extend(component.dependencies)


if args.download_dependencies:
    apt_get_pkgs = {
        # TODO: figure out what needs those exactly.
        'automake',
        'build-essential',
        'coreutils',
        'cpio',
        'libguestfs-tools',
        'moreutils', # ts
        'rsync',
        'unzip',
        'wget',

        # Linux kernel build dependencies.
        'bison',
        'flex',
        # Without this started failing in kernel 4.15 with:
        # Makefile:932: *** "Cannot generate ORC metadata for CONFIG_UNWINDER_ORC=y, please install libelf-dev, libelf-devel or elfutils-libelf-devel". Stop.
        'libelf-dev',

        # Our misc stuff.
        'git',
        'python3-pip',
        'tmux',
        'vinagre',

        # Userland.
        'gcc-aarch64-linux-gnu',
        'gcc-arm-linux-gnueabihf',
        'g++-aarch64-linux-gnu',
        'g++-arm-linux-gnueabihf',
    }
    apt_build_deps = set()
    submodules = set()
    submodules_shallow = set()
    python2_pkgs = set()
    python3_pkgs = {
        'pexpect==4.6.0',
    }
    for component in selected_components:
        apt_get_pkgs.update(component.apt_get_pkgs)
        apt_build_deps.update(component.apt_build_deps)
        submodules.update(component.submodules)
        submodules_shallow.update(component.submodules_shallow)
        python2_pkgs.update(component.python2_pkgs)
        python3_pkgs.update(component.python3_pkgs)
    if apt_get_pkgs or apt_build_deps:
        if args.travis:
            interactive_pkgs = {
                'libsdl2-dev',
            }
            apt_get_pkgs.difference_update(interactive_pkgs)
        if common.in_docker:
            sudo = []
            # https://askubuntu.com/questions/909277/avoiding-user-interaction-with-tzdata-when-installing-certbot-in-a-docker-contai
            os.environ['DEBIAN_FRONTEND'] = 'noninteractive'
            # https://askubuntu.com/questions/496549/error-you-must-put-some-source-uris-in-your-sources-list
            with open(os.path.join('/etc', 'apt', 'sources.list'), 'r') as f:
                sources_txt = f.read()
            sources_txt = re.sub('^# deb-src ', 'deb-src ', sources_txt, flags=re.MULTILINE)
            with open(os.path.join('/etc', 'apt', 'sources.list'), 'w') as f:
                f.write(sources_txt)
        else:
            sudo = ['sudo']
        if common.in_docker or args.travis:
            y = ['-y']
        else:
            y = []
        common.run_cmd(
            sudo + ['apt-get', 'update', common.Newline]
        )
        if apt_get_pkgs:
            common.run_cmd(
                sudo + ['apt-get', 'install'] + y + [common.Newline] +
                common.add_newlines(sorted(apt_get_pkgs))
            )
        if apt_build_deps:
            common.run_cmd(
                sudo +
                ['apt-get', 'build-dep'] + y + [common.Newline] +
                common.add_newlines(sorted(apt_build_deps))
            )
    if python2_pkgs:
        common.run_cmd(
            ['python', '-m', 'pip', 'install', '--user', common.Newline] +
            common.add_newlines(sorted(python2_pkgs))
        )
    if python3_pkgs:
        # Not with pip executable directly:
        # https://stackoverflow.com/questions/49836676/error-after-upgrading-pip-cannot-import-name-main/51846054#51846054
        common.run_cmd(
            ['python3', '-m', 'pip', 'install', '--user', common.Newline] +
            common.add_newlines(sorted(python3_pkgs))
        )
    git_cmd_common = ['git', 'submodule', 'update', '--init', '--recursive']
    if submodules:
        # == Other nice git options for when distros move to newer Git
        #
        # Currently not on Ubuntu 16.04:
        #
        # `--progress`: added on Git 2.10:
        #
        # * https://stackoverflow.com/questions/32944468/how-to-show-progress-for-submodule-fetching
        # * https://stackoverflow.com/questions/4640020/progress-indicator-for-git-clone
        #
        # `--jobs`: https://stackoverflow.com/questions/26957237/how-to-make-git-clone-faster-with-multiple-threads/52327638#52327638
        common.run_cmd(
            git_cmd_common + ['--', common.Newline] +
            common.add_newlines([os.path.join(common.submodules_dir, x) for x in sorted(submodules)])
        )
    if submodules_shallow:
        # == Shallow cloning.
        #
        # TODO Ideally we should shallow clone --depth 1 all of them.
        #
        # However, most git servers out there are crap or craply configured
        # and don't allow shallow cloning except for branches.
        #
        # So for now, let's shallow clone only the Linux kernel, which has by far
        # the largest .git repo history, and full clone the others.
        #
        # Then we will maintain a GitHub Linux kernel mirror / fork that always has a
        # lkmc branch, and point to it, so that it will always succeed.
        #
        # See also:
        #
        # * https://stackoverflow.com/questions/3489173/how-to-clone-git-repository-with-specific-revision-changeset
        # * https://stackoverflow.com/questions/2144406/git-shallow-submodules/47374702#47374702
        # * https://unix.stackexchange.com/questions/338578/why-is-the-git-clone-of-the-linux-kernel-source-code-much-larger-than-the-extrac
        #
        common.run_cmd(
            git_cmd_common + ['--depth', '1', '--', common.Newline] +
            common.add_newlines([os.path.join(common.submodules_dir, x) for x in sorted(submodules_shallow)])
        )

# Do the build.
for arch in archs:
    for component in selected_components:

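The component-selection loop above walks a work list so that the dependencies of every requested component are pulled in exactly once before the build runs. A standalone sketch of that traversal over a toy dependency map; the component names and their dependencies here are illustrative only:

....
# Toy dependency map: each component lists the components it needs.
dependencies = {
    'gem5-buildroot': ['buildroot', 'gem5'],
    'buildroot': ['copy-overlay'],
    'gem5': [],
    'copy-overlay': [],
}

def select(requested):
    selected = []
    seen = set()
    todo = list(requested)
    while todo:
        name = todo.pop(0)
        if name not in seen:
            seen.add(name)
            selected.append(name)
            # Enqueue dependencies so they also get selected, deduplicated by `seen`.
            todo.extend(dependencies[name])
    return selected

print(select(['gem5-buildroot']))
# ['gem5-buildroot', 'buildroot', 'gem5', 'copy-overlay']
....
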
common

@@ -24,11 +24,16 @@ import urllib
import urllib.request

this_module = sys.modules[__name__]
# https://stackoverflow.com/questions/20010199/how-to-determine-if-a-process-runs-inside-lxc-docker
in_docker = os.path.exists('/.dockerenv')
root_dir = os.path.dirname(os.path.abspath(__file__))
data_dir = os.path.join(root_dir, 'data')
p9_dir = os.path.join(data_dir, '9p')
gem5_non_default_src_root_dir = os.path.join(data_dir, 'gem5')
out_dir = os.path.join(root_dir, 'out')
if in_docker:
    out_dir = os.path.join(root_dir, 'out.docker')
else:
    out_dir = os.path.join(root_dir, 'out')
bench_boot = os.path.join(out_dir, 'bench-boot.txt')
packages_dir = os.path.join(root_dir, 'buildroot_packages')
kernel_modules_subdir = 'kernel_modules'

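The change above keeps Docker build output in a separate `out.docker` directory, so host and container builds, whose absolute paths differ, never get mixed in the same tree. A minimal standalone sketch of the same detection pattern, kept outside the repo's actual module:

....
import os

# Docker creates /.dockerenv in every container, which makes for a cheap
# "am I inside Docker?" check.
in_docker = os.path.exists('/.dockerenv')

root_dir = os.path.dirname(os.path.abspath(__file__))
out_dir = os.path.join(root_dir, 'out.docker' if in_docker else 'out')
print(out_dir)
....
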
download-dependencies (241 lines deleted)

@@ -1,241 +0,0 @@
#!/usr/bin/env bash

# Download build dependencies for the build, notably:
#
# - required git submodules
# - host packages
#
# Only needs to be run once before each type of build per-system.

set -eux
all=false
apt_get=true
baremetal=false
baremetal_given=false
buildroot=true
buildroot_given=false
linux=true
linux_given=false
interactive_pkgs=libsdl2-dev
parsec_benchmark_given=false
gem5=false
gem5_given=false
qemu=true
qemu_given=false
submodules_dir=submodules
submodules=
y=
while [ $# -gt 0 ]; do
  case "$1" in
    --all)
      all=true
      shift
    ;;
    --baremetal)
      baremetal_given=true
      shift
    ;;
    --buildroot)
      buildroot_given=true
      shift
    ;;
    --gem5)
      gem5_given=true
      shift
    ;;
    --parsec-benchmark)
      parsec_benchmark_given=true
      shift
    ;;
    --qemu)
      qemu_given=true
      shift
    ;;
    --no-apt-get)
      apt_get=false
      shift
    ;;
    --travis)
      interactive_pkgs=
      y=-y
      shift
    ;;
    *)
      echo 'unknown option' 1>&2
      exit 2
    ;;
  esac
done
if ! "$all" && "$gem5_given" && ! "$qemu_given"; then
  qemu=false
fi
if "$all" || "$gem5_given"; then
  gem5=true
fi
if "$all" || "$baremetal_given"; then
  baremetal=true
fi
if ! "$all" && "$baremetal_given" && ! "$buildroot_given"; then
  buildroot=false
fi
if "$all" || "$parsec_benchmark_given"; then
  submodules="${submodules} parsec-benchmark"
fi

if "$apt_get"; then
  pkgs="\
automake \
bc \
bison \
build-essential \
ccache \
coreutils \
cpio \
flex \
gcc-aarch64-linux-gnu \
gcc-arm-linux-gnueabihf \
g++-aarch64-linux-gnu \
g++-arm-linux-gnueabihf \
git \
libguestfs-tools \
moreutils \
python3-pip \
rsync \
tmux \
unzip \
vinagre \
wget \
"
  # gem5 uses Python 2.
  pip2_pkgs="\
"
  pip3_pkgs="\
pexpect==4.6.0 \
"
  if "$gem5"; then
    pkgs="${pkgs}\
diod \
libgoogle-perftools-dev \
protobuf-compiler \
python-dev \
python-pip \
scons \
"
    pip2_pkgs="${pip2_pkgs}\
pydot \
"
  fi
  if "$baremetal"; then
    # http://crosstool-ng.github.io/docs/os-setup/
    pkgs="${pkgs} \
bison \
docbook2x \
flex \
gcc \
gperf \
help2man \
libncurses5-dev \
libtool-bin \
make \
python-dev \
texinfo \
"
  fi
  command -v apt-get >/dev/null 2>&1 || {
    cat <<EOF
apt-get not found. You're on your own for installing dependencies.

On Ubuntu they are:

$pkgs
EOF
    exit 0
  }

  # Without this started failing in kernel 4.15 with:
  # Makefile:932: *** "Cannot generate ORC metadata for CONFIG_UNWINDER_ORC=y, please install libelf-dev, libelf-devel or elfutils-libelf-devel". Stop.
  pkgs="$pkgs libelf-dev"

  # https://stackoverflow.com/questions/20010199/determining-if-a-process-runs-inside-lxc-docker
  if [ -f /.dockerenv ]; then
    # https://askubuntu.com/questions/909277/avoiding-user-interaction-with-tzdata-when-installing-certbot-in-a-docker-contai
    export DEBIAN_FRONTEND=noninteractive
    mysudo=
    # https://askubuntu.com/questions/496549/error-you-must-put-some-source-uris-in-your-sources-list
    sed -Ei 's/^# deb-src /deb-src /' /etc/apt/sources.list
    y=-y
  else
    mysudo=sudo
  fi
  $mysudo apt-get update $y
  # Building SDL for QEMU in Buildroot was rejected upstream because it adds many dependencies:
  # https://patchwork.ozlabs.org/patch/770684/
  # We are just using the host SDL for now, if it causes too many problems we might remove it.
  # libsdl2-dev needs to be installed separately from sudo apt-get build-dep qemu
  # because Ubuntu 16.04's QEMU uses SDL 1.
  $mysudo apt-get install $y \
    $pkgs \
    $interactive_pkgs \
  ;
  if "$qemu"; then
    $mysudo apt-get build-dep $y qemu
  fi
  # Generate graphs of config.ini under m5out.
  # Not with pip directly:
  # https://stackoverflow.com/questions/49836676/error-after-upgrading-pip-cannot-import-name-main/51846054#51846054
  if "$gem5"; then
    python -m pip install --user $pip2_pkgs
  fi
  python3 -m pip install --user $pip3_pkgs
fi

## Submodules

if "$baremetal"; then
  submodules="${submodules} crosstool-ng"
fi
if "$buildroot"; then
  submodules="${submodules} buildroot"
fi
if "$qemu"; then
  submodules="${submodules} qemu"
fi
if "$gem5"; then
  submodules="${submodules} gem5"
fi
submodules="$(for submodule in ${submodules}; do printf "${submodules_dir}/${submodule} "; done)"

# == Shallow cloning.
#
# TODO Ideally we should shallow clone --depth 1 all of them.
#
# However, most git servers out there are crap or craply configured
# and don't allow shallow cloning except for branches.
#
# So for now, let's shallow clone only the Linux kernel, which has by far
# the largest .git repo history, and full clone the others.
#
# Then we will maintain a GitHub Linux kernel mirror / fork that always has a
# lkmc branch, and point to it, so that it will always succeed.
#
# See also:
#
# * https://stackoverflow.com/questions/3489173/how-to-clone-git-repository-with-specific-revision-changeset
# * https://stackoverflow.com/questions/2144406/git-shallow-submodules/47374702#47374702
# * https://unix.stackexchange.com/questions/338578/why-is-the-git-clone-of-the-linux-kernel-source-code-much-larger-than-the-extrac
#
# == Other nice git options for when distros move to newer Git
#
# Currently not on Ubuntu 16.04:
#
# `--progress`: added on Git 2.10:
#
# * https://stackoverflow.com/questions/32944468/how-to-show-progress-for-submodule-fetching
# * https://stackoverflow.com/questions/4640020/progress-indicator-for-git-clone
#
# `--jobs`: https://stackoverflow.com/questions/26957237/how-to-make-git-clone-faster-with-multiple-threads/52327638#52327638
#
git submodule update --init --recursive -- ${submodules}
if "$linux"; then
  git submodule update --depth 1 --init --recursive -- "${submodules_dir}/linux"
fi
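The deleted script enabled the `deb-src` entries with `sed -Ei 's/^# deb-src /deb-src /' /etc/apt/sources.list`, and the new Python code in `./build` performs the same rewrite with `re.sub`. A standalone sketch of that substitution, operating on an in-memory string with a typical Ubuntu entry rather than on the real `/etc/apt/sources.list`:

....
import re

sources_txt = '''\
deb http://archive.ubuntu.com/ubuntu bionic main
# deb-src http://archive.ubuntu.com/ubuntu bionic main
'''

# re.MULTILINE makes ^ match at the start of every line, like sed does.
sources_txt = re.sub('^# deb-src ', 'deb-src ', sources_txt, flags=re.MULTILINE)
print(sources_txt)
....
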
run (41 changed lines)
@@ -117,11 +117,13 @@ def main(args, extra_args=None):
    trace_type = args.trace

    def raise_rootfs_not_found():
        raise Exception('Root filesystem not found. Did you build it?\n' \
                        'Tried to use: ' + common.disk_image)
        if not args.dry_run:
            raise Exception('Root filesystem not found. Did you build it?\n' \
                            'Tried to use: ' + common.disk_image)
    def raise_image_not_found():
        raise Exception('Executable image not found. Did you build it?\n' \
                        'Tried to use: ' + common.image)
        if not args.dry_run:
            raise Exception('Executable image not found. Did you build it?\n' \
                            'Tried to use: ' + common.image)
    if common.image is None:
        raise Exception('Baremetal ELF file not found. Tried:\n' + '\n'.join(paths))
    cmd = debug_vm.copy()
@@ -247,7 +249,7 @@ def main(args, extra_args=None):
    else:
        qemu_executable = common.qemu_executable
    qemu_found = os.path.exists(qemu_executable)
    if not qemu_found:
    if not qemu_found and not args.dry_run:
        raise Exception('QEMU executable not found, did you forget to build or install it?\n' \
                        'Tried to use: ' + qemu_executable)
    if args.debug_vm:
@@ -392,19 +394,20 @@ def main(args, extra_args=None):
    panic_msg = b'Kernel panic - not syncing'
    panic_re = re.compile(panic_msg)
    error_string_found = False
    with open(common.termout_file, 'br') as logfile:
        for line in logfile:
            if panic_re.search(line):
                error_string_found = True
    with open(common.guest_terminal_file, 'br') as logfile:
        lines = logfile.readlines()
        if lines:
            last_line = lines[-1]
            if last_line.rstrip() == common.magic_fail_string:
                error_string_found = True
    if error_string_found:
        common.log_error('simulation error detected by parsing logs')
        return 1
    if out_file is not None and not args.dry_run:
        with open(common.termout_file, 'br') as logfile:
            for line in logfile:
                if panic_re.search(line):
                    error_string_found = True
        with open(common.guest_terminal_file, 'br') as logfile:
            lines = logfile.readlines()
            if lines:
                last_line = lines[-1]
                if last_line.rstrip() == common.magic_fail_string:
                    error_string_found = True
        if error_string_found:
            common.log_error('simulation error detected by parsing logs')
            return 1
    return 0

def get_argparse():
@@ -447,7 +450,7 @@ Example: `./run -a arm -e 'init=/poweroff.out'`
'''
    )
    parser.add_argument(
        '-F', '--eval-busybox',
        '-F', '--eval-after-init',
        help='''\
Pass a base64 encoded command line parameter that gets evalled by the Busybox init.
See: https://github.com/cirosantilli/linux-kernel-module-cheat#init-busybox

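The renamed `--eval-after-init` option takes a base64 encoded command, per its help text above. A small sketch of preparing such a value; the guest command shown is arbitrary and the quoted invocation is only an illustration of how the encoded string would be passed:

....
import base64

cmd = 'insmod hello.ko && dmesg | tail'
encoded = base64.b64encode(cmd.encode()).decode()
# Pass `encoded` as the value of -F / --eval-after-init, e.g.
#   ./run --eval-after-init "<encoded>"   (illustrative invocation)
print(encoded)
....
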
run-docker (24 changed lines)
@@ -1,18 +1,24 @@
#!/usr/bin/env bash
set -eu
cmd="${1:-start}"
cmd="$1"
shift
container_name=lkmc
target_dir=/root/linux-kernel-module-cheat
if [ "$cmd" = start ]; then
  sudo docker start -ai "$container_name"
elif [ "$cmd" = sh ]; then
  # https://stackoverflow.com/questions/39794509/how-to-open-multiple-terminals-in-docker
  sudo docker exec -it "$container_name" bash "$@"
elif [ "$cmd" = setup ]; then
if [ "$cmd" = create ]; then
  # --privileged for KVM:
  # https://stackoverflow.com/questions/48422001/launching-qemu-kvm-from-inside-docker-container
  sudo docker run --name "$container_name" --net host -i --privileged -t -w "${target_dir}" -v "$(pwd):${target_dir}" ubuntu:18.04 bash
elif [ "$cmd" = DELETE ]; then
  sudo docker create --name "$container_name" --net host -i --privileged -t -w "${target_dir}" -v "$(pwd):${target_dir}" ubuntu:18.04 bash
elif [ "$cmd" = start ]; then
  sudo docker start "$container_name"
elif [ "$cmd" = stop ]; then
  sudo docker stop "$container_name"
elif [ "$cmd" = sh ]; then
  # https://stackoverflow.com/questions/39794509/how-to-open-multiple-terminals-in-docker
  if [ "$#" -gt 0 ]; then
    sudo docker exec -it "$container_name" bash -c "$*"
  else
    sudo docker exec -it "$container_name" bash
  fi
elif [ "$cmd" = DESTROY ]; then
  sudo docker rm "$container_name"
else
  echo "error: unknown action: ${cmd}" 1>&2

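The wrapper above boils down to a fixed set of docker invocations per action. A sketch that just prints the equivalent commands for each action instead of executing anything, mirroring the container name, target directory, and docker arguments visible in the script:

....
import os

container = 'lkmc'
target_dir = '/root/linux-kernel-module-cheat'
volume = '{}:{}'.format(os.getcwd(), target_dir)
actions = {
    'create': ['docker', 'create', '--name', container, '--net', 'host', '-i',
               '--privileged', '-t', '-w', target_dir, '-v', volume,
               'ubuntu:18.04', 'bash'],
    'start': ['docker', 'start', container],
    'sh': ['docker', 'exec', '-it', container, 'bash'],
    'stop': ['docker', 'stop', container],
    'DESTROY': ['docker', 'rm', container],
}
# Dry-run style: print what each action would execute (the script prefixes sudo).
for action, cmd in actions.items():
    print('{}: sudo {}'.format(action, ' '.join(cmd)))
....
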