the docker setup is perfect

This commit is contained in:
Ciro Santilli 六四事件 法轮功
2018-11-11 00:00:01 +00:00
parent 403d4a9d06
commit a06872241b
6 changed files with 133 additions and 106 deletions

4
.dockerignore Normal file

@@ -0,0 +1,4 @@
# Ignore everything, since we get the repository files
# with a volume.
*
.*

10
Dockerfile Normal file

@@ -0,0 +1,10 @@
# https://github.com/cirosantilli/linux-kernel-module-cheat#docker
FROM ubuntu:18.04
RUN apt update
# Minimum requirements to run ./build --download-dependencies
RUN apt-get install -y \
  git \
  python3 \
  python3-distutils \
  ;
CMD bash


@@ -392,10 +392,9 @@ This is a good option if you are on a Linux host, but the native setup failed du
For example, to do a <<qemu-buildroot-setup>> inside Docker, run:
....
sudo apt-get install docker && \
sudo apt-get install docker
./run-docker create && \
./run-docker start && \
./run-docker sh ./build --download-dependencies && \
./run-docker sh -- ./build --download-dependencies
./run-docker sh
....
@@ -405,35 +404,50 @@ You are now left inside a shell in the Docker! From there, just run as usual:
./run
....
The host git top level directory is mounted inside the guest with a link:https://stackoverflow.com/questions/23439126/how-to-mount-a-host-directory-in-a-docker-container[Docker volume], which means for example that you can use your host's GUI text editor directly on the files. Just don't forget that if you nuke that directory on the guest, then it gets nuked on the host as well!
Command breakdown:
* `./run-docker create`: create the container.
* `./run-docker create`: create the image and container.
+
Needed only the very first time you use Docker, or if you run `./run-docker DESTROY` to restart from scratch or to save some disk space.
+
The container name is `lkmc` and shows up in the list of all containers:
The image and container name is `lkmc`. The container shows under:
+
....
docker ps -a
....
* `./run-docker start`: start the container as a daemon in the background.
+
Needed only after reboot, or if you run `./run-docker stop` to save CPU or memory resources.
+
The container can now be seen on the list of running containers:
and the image shows under:
+
....
docker ps
docker images
....
* `./run-docker sh`: open a shell on the container.
+
If the container has not been started previously, this starts it. Starting can also be done explicitly with:
+
....
./run-docker start
....
* `./run-docker sh`: open a shell on a previously started Docker daemon.
+
Quit the shell as usual with `Ctrl-D`.
+
Can be called multiple times from different host terminals to open multiple shells.
This can be called multiple times from different host terminals to open multiple shells.
* `./run-docker stop`: stop the container.
+
This might save a bit of CPU and RAM once you stop working on this project, but it should not be a lot.
* `./run-docker DESTROY`: delete the container and image.
+
This doesn't really clean the build, since we mount the guest's working directory on the host git top-level, so you basically just got rid of the `apt-get` installs.
+
To actually delete the Docker build, run on host:
+
....
# sudo rm -rf out.docker
....
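The lifecycle described by the bullets above can be summarized as a small dispatch sketch. This is an illustrative Python stub, not the real link:run-docker[] script: the `docker` invocations are recorded in a list instead of being executed.

```python
# Illustrative stub of the create / start / sh / stop / DESTROY
# lifecycle described above. Docker commands are collected in a
# list instead of being run, so the control flow is visible
# without Docker installed.
calls = []

def run_cmd(cmd):
    calls.append(' '.join(cmd))

def start():
    run_cmd(['docker', 'start', 'lkmc'])

def sh(args=None):
    # `sh` starts the container first, so no explicit `start`
    # is needed after a reboot.
    start()
    run_cmd(['docker', 'exec', '-i', '-t', 'lkmc'] + (args or ['bash']))

def stop():
    run_cmd(['docker', 'stop', 'lkmc'])

sh(['./build', '--download-dependencies'])
stop()
print(calls)
```

Note how `sh` calls `start` unconditionally: `docker start` on an already running container is a no-op, which is what makes the auto-start behavior safe.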
In order to use functionality such as <<gdb>> from inside Docker, you need a second shell inside the container. You can either do that from another shell with:
To use <<gdb>> from inside Docker, you need a second shell inside the container. You can either do that from another shell with:
....
./run-docker sh
@@ -441,14 +455,12 @@ In order to use functionality such as <<gdb>> from inside Docker, you need a sec
or even better, by starting a <<tmux>> session inside the container. We install `tmux` by default in the container.
You can start a second shell and run a command in it at the same time with:
You can also start a second shell and run a command in it at the same time with:
....
./run-docker sh ./run-gdb start_kernel
./run-docker sh -- ./run-gdb start_kernel
....
Docker stops if and only if you quit the initial shell; you can quit this one without consequences.
To use <<qemu-graphic-mode>> from Docker, run:
....
@@ -462,21 +474,11 @@ sudo apt-get install vinagre
./vnc
....
TODO make files created inside Docker be owned by the current user in host instead of `root`: https://stackoverflow.com/questions/23544282/what-is-the-best-way-to-manage-permissions-for-docker-shared-volumes
* https://stackoverflow.com/questions/33681396/how-do-i-write-to-a-volume-container-as-non-root-in-docker
* https://stackoverflow.com/questions/23544282/what-is-the-best-way-to-manage-permissions-for-docker-shared-volumes
* https://stackoverflow.com/questions/31779802/shared-volume-file-permissions-ownership-docker
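A commonly suggested direction in the threads above is to create the container with the host user's UID and GID via `docker create --user`, so that files written into the mounted volume are not owned by `root`. A minimal sketch of building such an argument list follows; this repo does not do this yet (hence the TODO), so treat it as one possible approach:

```python
# Sketch of one possible fix for the ownership TODO above: pass the
# host UID:GID to `docker create --user` so files written into the
# mounted volume belong to the host user instead of root.
# This only builds the argument list; it does not run Docker.
import os

def create_args(container_name, image_name):
    return [
        'docker', 'create',
        '--name', container_name,
        '--user', '{}:{}'.format(os.getuid(), os.getgid()),
        image_name,
    ]

args = create_args('lkmc', 'lkmc')
print(args[:2])
```

The trade-off discussed in those threads is that a non-root user inside the container then needs the `apt-get` steps done at image build time rather than interactively.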
[[prebuilt]]
=== Prebuilt Buildroot setup
@@ -717,8 +719,8 @@ Our C bare-metal compiler is built with link:https://github.com/crosstool-ng/cro
QEMU:
....
./build --arch arm --download-dependencies qemu-baremetal
./run --arch arm --baremetal interactive/prompt
./build --arch aarch64 --download-dependencies qemu-baremetal
./run --arch aarch64 --baremetal interactive/prompt
....
You are now left inside QEMU running the tiny baremetal system link:baremetal/interactive/prompt.c[], which uses the UART to:
@@ -749,7 +751,7 @@ vim baremetal/interactive/prompt.c
and run:
....
./build-baremetal --arch arm
./build-baremetal --arch aarch64
....
`./build qemu-baremetal` had already called link:build-baremetal[] for us, in addition to its prerequisites. `./build-baremetal` uses crosstool-NG, and so it must be preceded by link:build-crosstool-ng[], which `./build qemu-baremetal` also calls.
@@ -757,33 +759,33 @@ and run:
Every `.c` file inside link:baremetal/[] and `.S` file inside `baremetal/arch/<arch>/` generates a separate baremetal image. You can run a different image with commands such as:
....
./run --arch arm --baremetal exit
./run --arch arm --baremetal arch/arm/add
./run --arch aarch64 --baremetal exit
./run --arch aarch64 --baremetal arch/aarch64/add
....
which will run respectively:
* link:baremetal/exit.c[]
* link:baremetal/arch/arm/add.S[]
* link:baremetal/arch/aarch64/add.S[]
Alternatively, for the sake of tab completion, we also accept relative paths inside `baremetal/`:
....
./run --arch arm --baremetal baremetal/exit.c
./run --arch arm --baremetal baremetal/arch/arm/add.S
./run --arch aarch64 --baremetal baremetal/exit.c
./run --arch aarch64 --baremetal baremetal/arch/aarch64/add.S
....
Absolute paths, however, are used as is and must point to the actual executable:
....
./run --arch arm --baremetal "$(./getvar --arch arm baremetal_build_dir)/exit.elf"
./run --arch aarch64 --baremetal "$(./getvar --arch aarch64 baremetal_build_dir)/exit.elf"
....
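The three accepted `--baremetal` styles above (bare name, source path relative to the repository root, absolute executable path) suggest a resolution rule like the following. This is a simplified sketch with a placeholder build directory, not the actual logic in link:run[]:

```python
# Simplified sketch of resolving a --baremetal argument to a built
# executable, covering the three styles shown above. The build
# directory path is a placeholder.
import os

def resolve_baremetal(arg, build_dir):
    if os.path.isabs(arg):
        # Absolute paths are used as is.
        return arg
    # Accept the leading baremetal/ for the sake of tab completion.
    prefix = 'baremetal/'
    if arg.startswith(prefix):
        arg = arg[len(prefix):]
    # Map a .c or .S source path to its built executable name.
    root, ext = os.path.splitext(arg)
    if ext in ('.c', '.S'):
        arg = root
    return os.path.join(build_dir, arg + '.elf')

build_dir = '/placeholder/baremetal_build_dir'
print(resolve_baremetal('exit', build_dir))
print(resolve_baremetal('baremetal/arch/aarch64/add.S', build_dir))
```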
To use gem5 instead of QEMU do:
....
./build --download-dependencies gem5-baremetal
./run --arch arm --baremetal interactive/prompt --gem5
./run --arch aarch64 --baremetal interactive/prompt --gem5
....
and then <<qemu-buildroot-setup,as usual>> open a shell with:
@@ -813,15 +815,15 @@ The reason for that is that on baremetal we don't parse the <<device-tree,device
`gem5` also supports the `RealViewPBX` machine, which represents older hardware than the default `VExpress_GEM5_V1`:
....
./build-baremetal --arch arm --gem5 --machine RealViewPBX
./run --arch arm --baremetal interactive/prompt --gem5 --machine RealViewPBX
./build-baremetal --arch aarch64 --gem5 --machine RealViewPBX
./run --arch aarch64 --baremetal interactive/prompt --gem5 --machine RealViewPBX
....
This generates yet more separate images with new magic constants:
....
echo "$(./getvar --arch arm --baremetal interactive/prompt --gem5 --machine VExpress_GEM5_V1 image)"
echo "$(./getvar --arch arm --baremetal interactive/prompt --gem5 --machine RealViewPBX image)"
echo "$(./getvar --arch aarch64 --baremetal interactive/prompt --gem5 --machine VExpress_GEM5_V1 image)"
echo "$(./getvar --arch aarch64 --baremetal interactive/prompt --gem5 --machine RealViewPBX image)"
....
But just stick to newer and better `VExpress_GEM5_V1` unless you have a good reason to use `RealViewPBX`.

52
build

@@ -57,6 +57,28 @@ def run_cmd(cmd, arch):
buildroot_component = Component(
lambda arch: run_cmd(['build-buildroot'], arch),
submodules = {'buildroot'},
# https://buildroot.org/downloads/manual/manual.html#requirement
apt_get_pkgs={
'bash',
'bc',
'binutils',
'build-essential',
'bzip2',
'cpio',
'g++',
'gcc',
'graphviz',
'gzip',
'make',
'patch',
'perl',
'python-matplotlib',
'python3',
'rsync',
'sed',
'tar',
'unzip',
},
)
name_to_component_map = {
@@ -86,6 +108,7 @@ name_to_component_map = {
'bison',
'docbook2x',
'flex',
'gawk',
'gcc',
'gperf',
'help2man',
@@ -124,6 +147,13 @@ name_to_component_map = {
'linux': Component(
lambda arch: run_cmd(['build-linux'], arch),
submodules_shallow={'linux'},
apt_get_pkgs={
'bison',
'flex',
# Without this started failing in kernel 4.15 with:
# Makefile:932: *** "Cannot generate ORC metadata for CONFIG_UNWINDER_ORC=y, please install libelf-dev, libelf-devel or elfutils-libelf-devel". Stop.
'libelf-dev',
},
),
'modules': Component(
lambda arch: run_cmd(['build-modules'], arch),
@@ -317,29 +347,13 @@ for component_name in components:
if args.download_dependencies:
apt_get_pkgs = {
# TODO: figure out what needs those exactly.
'automake',
'build-essential',
'coreutils',
'cpio',
'libguestfs-tools',
'moreutils', # ts
'rsync',
'unzip',
'wget',
# Linux kernel build dependencies.
'bison',
'flex',
# Without this started failing in kernel 4.15 with:
# Makefile:932: *** "Cannot generate ORC metadata for CONFIG_UNWINDER_ORC=y, please install libelf-dev, libelf-devel or elfutils-libelf-devel". Stop.
'libelf-dev',
# Our misc stuff.
# Core requirements for this repo.
'git',
'moreutils', # ts
'python3-pip',
'tmux',
'vinagre',
'wget',
# Userland.
'gcc-aarch64-linux-gnu',
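The per-component `apt_get_pkgs` sets above are what `--download-dependencies` draws from: the union of the packages of every selected component gets passed to `apt-get install`. A minimal sketch of that aggregation, with illustrative names and package subsets rather than the script's full lists:

```python
# Minimal sketch of aggregating per-component apt_get_pkgs sets,
# as ./build --download-dependencies does. Names and package
# subsets are illustrative, not the full lists from the script.
class Component:
    def __init__(self, apt_get_pkgs=None, submodules=None):
        self.apt_get_pkgs = apt_get_pkgs or set()
        self.submodules = submodules or set()

name_to_component_map = {
    'buildroot': Component(
        apt_get_pkgs={'bash', 'bc', 'make'},
        submodules={'buildroot'},
    ),
    'linux': Component(
        # libelf-dev needed since kernel 4.15 for ORC metadata.
        apt_get_pkgs={'bison', 'flex', 'libelf-dev'},
        submodules={'linux'},
    ),
}

def collect_apt_pkgs(component_names):
    pkgs = set()
    for name in component_names:
        pkgs |= name_to_component_map[name].apt_get_pkgs
    return sorted(pkgs)

print(collect_apt_pkgs(['buildroot', 'linux']))
```

Keeping the package lists next to the component that needs them, as this commit does, beats one global list: installing only what the selected components require, and documenting why each package is there.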

2
run

@@ -463,7 +463,7 @@ To pass extra Linux kernel command line options, add a dash `-`
separator, and place the options after the dash. Intended for custom
options understood by our `init` scripts, most of which are prefixed
by `lkmc_`.
Example: `./run -f 'lkmc_eval="wget google.com" lkmc_lala=y'`
Example: `./run --kernel-cli-after-dash 'lkmc_eval="wget google.com" lkmc_lala=y'`
Mnemonic: `-f` comes after `-e`.
'''
)
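On the guest side, an init script can recover such `lkmc_`-prefixed options from the kernel command line. An illustrative parsing sketch follows; the actual init scripts in this repo may differ:

```python
# Illustrative sketch of splitting lkmc_-prefixed options out of a
# kernel command line string, as an init script might do. shlex
# honors the quoting used in the example above.
import shlex

cmdline = 'lkmc_eval="wget google.com" lkmc_lala=y'
options = {}
for token in shlex.split(cmdline):
    key, _, value = token.partition('=')
    if key.startswith('lkmc_'):
        options[key] = value

print(options)
```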


@@ -10,53 +10,50 @@ container_hostname = common.repo_short_id
image_name = common.repo_short_id
target_dir = '/root/{}'.format(common.repo_short_id)
docker = ['sudo', 'docker']
def sh(args):
if args:
sh_args = ['-c'] + args
else:
sh_args = []
def create(args):
common.run_cmd(docker + ['build', '-t', image_name, '.', common.Newline])
# --privileged for KVM:
# https://stackoverflow.com/questions/48422001/launching-qemu-kvm-from-inside-docker-container
common.run_cmd(
docker +
[
'exec',
'-i',
'-t',
container_name,
'bash',
] +
'create', common.Newline,
'--hostname', container_hostname, common.Newline,
'-i', common.Newline,
'--name', container_name, common.Newline,
'--net', 'host', common.Newline,
'--privileged', common.Newline,
'-t', common.Newline,
'-w', target_dir, common.Newline,
'-v', '{}:{}'.format(os.getcwd(), target_dir), common.Newline,
image_name,
]
)
def destroy(args):
stop(args)
common.run_cmd(docker + ['rm', container_name, common.Newline])
common.run_cmd(docker + ['rmi', image_name, common.Newline])
def sh(args):
start(args)
if args:
sh_args = args
else:
sh_args = ['bash']
common.run_cmd(
docker + ['exec', '-i', '-t', container_name] +
sh_args +
[common.Newline],
)
def start(args):
common.run_cmd(docker + ['start', container_name, common.Newline])
def stop(args):
common.run_cmd(docker + ['stop', container_name, common.Newline])
cmd_action_map = {
'create': lambda args:
# --privileged for KVM:
# https://stackoverflow.com/questions/48422001/launching-qemu-kvm-from-inside-docker-container
common.run_cmd(
docker +
[
'create', common.Newline,
'--hostname', container_hostname, common.Newline,
'-i', common.Newline,
'--name', container_name, common.Newline,
'--net', 'host', common.Newline,
'--privileged', common.Newline,
'-t', common.Newline,
'-w', target_dir, common.Newline,
'-v', '{}:{}'.format(os.getcwd(), target_dir), common.Newline,
'ubuntu:18.04', common.Newline,
'bash', common.Newline,
]
),
'start': lambda args:
common.run_cmd(docker + [ 'start', container_name, common.Newline])
,
'stop': lambda args:
common.run_cmd(docker + ['stop', container_name, common.Newline])
,
'create': lambda args: create(args),
'DESTROY': lambda args: destroy(args),
'sh': lambda args: sh(args),
'DESTROY': lambda args:
common.run_cmd(docker + [ 'rm', container_name, common.Newline])
,
'start': lambda args: start(args),
'stop': lambda args: stop(args),
}
parser = argparse.ArgumentParser()
parser.add_argument('cmd', choices=cmd_action_map)