improve the release procedure

This commit is contained in:
Ciro Santilli 六四事件 法轮功
2019-01-22 00:00:00 +00:00
parent 5ba7b31357
commit 34085fd96d
16 changed files with 310 additions and 241 deletions


@@ -219,7 +219,11 @@ insmod /mnt/9p/out_rootfs_overlay/hello.ko
and the new `pr_info` message should now show on the terminal at the end of the boot.

This works because we have a <<9p>> mount set up there by default, which mounts the host directory that contains the build outputs on the guest:
....
ls "$(./getvar out_rootfs_overlay_dir)"
....
The fast method is slightly risky because your previously insmodded buggy kernel module attempt might have corrupted the kernel memory, which could affect future runs.
@@ -233,7 +237,7 @@ The safe way is to first quit QEMU, rebuild the modules, put them in the root fi
./run --eval-after 'insmod /hello.ko'
....

`./build-buildroot` is required after `./build-modules` because it re-generates the root filesystem with the modules that we compiled at `./build-modules`.

You can see that `./build` does that as well, by running:
@@ -844,9 +848,9 @@ For more information on baremetal, see the section: <<baremetal>>. The following
Much like <<baremetal-setup>>, this is another fun setup that does not require Buildroot or the Linux kernel.

Introduction at: <<user-mode-simulation>>.

Getting started at: <<qemu-user-mode>>.
[[gdb]]
== GDB step debug
@@ -3329,7 +3333,7 @@ This would have several advantages:
** no need to regenerate the root filesystem at all and reboot
** overcomes the `check_bin_arch` problem: <<rpath>>
* we could keep the base root filesystem very small, which implies:
** less host disk usage, no need to copy the entire `./getvar out_rootfs_overlay_dir` to the image again
** no need to worry about <<br2_target_rootfs_ext2_size>>

We can already make host files appear on the guest with <<9p>>, but they appear on a subdirectory instead of the root.
@@ -10244,6 +10248,8 @@ Once you've built a package in to the image, there is no easy way to remove it.
Documented at: link:https://github.com/buildroot/buildroot/blob/2017.08/docs/manual/rebuilding-packages.txt#L90[]
Also mentioned at: https://stackoverflow.com/questions/47320800/how-to-clean-only-target-in-buildroot
See this for a sample manual workaround: <<parsec-uninstall>>.

=== BR2_TARGET_ROOTFS_EXT2_SIZE
@@ -11351,6 +11357,7 @@ but note that this does not include script specific options.
You don't need to depend on GitHub:

....
sudo apt install asciidoctor
./build-doc
xdg-open out/README.html
....
@@ -11903,11 +11910,9 @@ This directory is copied into the target filesystem by:
Source: link:copy-overlay[]
Build Buildroot is required for the same reason as described at: <<your-first-kernel-module-hack>>.

However, since the link:rootfs_overlay[] directory does not require compilation, unlike say <<your-first-kernel-module-hack,kernel modules>>, we also make it <<9p>> available to the guest directly even without `./copy-overlay` at:
....
ls /mnt/9p/rootfs_overlay
@@ -11915,15 +11920,6 @@ ls /mnt/9p/rootfs_overlay
This way you can just hack away the scripts and try them out immediately without any further operations.
=== Test this repo

==== Automated tests
@@ -11942,16 +11938,16 @@ Sources:
* link:build-test[]
* link:test[]
The link:test[] script runs several different types of tests, which can also be run separately as explained at:
* link:test-boot[]
* <<test-userland-in-full-system>>
* <<user-mode-tests>>
* <<baremetal-tests>>
* <<test-gdb>>
link:test[] does not run all possible tests, because there are too many possible variations and that would take forever. The rationale is the same as for `./build all` and is explained in `./build --help`.
See the sources of those test scripts to learn how to run more specialized tests.
@@ -11970,12 +11966,6 @@ This command would run the test four times, using `x86_64` and `aarch64` with bo
Without those flags, it defaults to just running the default arch and emulator once: `x86_64` and `qemu`.
===== Test userland in full system

Run all userland tests from inside full system simulation (i.e. not <<user-mode-simulation>>):
@@ -12031,7 +12021,26 @@ To debug GDB problems on gem5, you might want to enable the following <<gem5-tra
;
....
===== Magic failure string
Since there is no standardized exit status concept that works across all emulators for full system, we just parse the terminal output for a magic failure string to check if tests failed.
If a full system simulation outputs a line containing only exactly the magic string:
....
lkmc_test_fail
....
to the terminal, then our run scripts detect that and exit with status `1`.
This magic output string is notably used by:
* the `common_assert_fail()` function, which is used by <<baremetal-tests>>
* link:rootfs_overlay/test_fail.sh[], which is used by <<test-userland-in-full-system>>
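The detection itself amounts to scanning each line of the captured terminal output for an exact match. A minimal sketch of the idea, with a hypothetical helper name (not the actual implementation in the run scripts):

```python
def terminal_output_indicates_failure(output):
    # The magic string must appear on a line containing only exactly
    # 'lkmc_test_fail', so substring matches inside other text don't count.
    return any(line == 'lkmc_test_fail' for line in output.splitlines())

# The run scripts would then exit with status 1 if the string was seen.
exit_status = 1 if terminal_output_indicates_failure('BOOT OK\nlkmc_test_fail\n') else 0
```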
=== Non-automated tests
==== Test GDB Linux kernel
For the Linux kernel, do the following manual tests for now.
@@ -12054,23 +12063,14 @@ Then proceed to do the following tests:
* `/count.sh` and `break __x64_sys_write`
* `insmod /timer.ko` and `break lkmc_timer_callback`
==== Test the Internet

You should also test that the Internet works:

....
./run --arch x86_64 --kernel-cli '- lkmc_eval="ifup -a;wget -S google.com;poweroff;"'
....
=== Bisection

When updating the Linux kernel, QEMU and gem5, things sometimes break.
@@ -12174,37 +12174,62 @@ This can be used to check the determinism of:
=== Release

==== Release procedure

Ensure that the <<automated-tests>> are passing on a clean build:

....
mv out out.bak
./build-test --size 3 && ./test --size 3
....

The clean build is necessary as it generates clean images, since <<remove-buildroot-packages,it is not possible to remove Buildroot packages>>.

Run all tests in <<non-automated-tests>> on just QEMU x86_64 and QEMU aarch64.

Ensure that the <<benchmark-this-repo,benchmarks>> look fine:

....
./bench-all -A
....

Create a release candidate and upload it:

....
git tag -a -m '' v3.0-rc1
git push --follow-tags
./release-zip --all-archs
# export LKMC_GITHUB_TOKEN=<your-token>
./release-upload
....

Do out-of-box testing of the release candidate:

....
cd ..
git clone https://github.com/cirosantilli/linux-kernel-module-cheat linux-kernel-module-cheat-release
cd linux-kernel-module-cheat-release
....

Test <<prebuilt>>, and then go through all of the <<getting-started>> section in order.

Once everything looks fine, publish the release with:

....
git tag -a v3.0
# Describe the release in the tag message.
git push --follow-tags
./release-zip --all-archs
# export LKMC_GITHUB_TOKEN=<your-token>
./release-upload
....
==== release-zip

Create a zip containing all files required for <<prebuilt>>:

....
./build --all-archs release && ./release-zip --all-archs
....

Source: link:release-zip[]
@@ -12217,7 +12242,14 @@ echo "$(./getvar release_zip_file)"
which you can then upload somewhere.

==== release-upload

After:

* running <<release-zip>>
* creating and pushing a tag to GitHub

you can upload the release to GitHub automatically with:

....
# export LKMC_GITHUB_TOKEN=<your-token>
@@ -12226,9 +12258,14 @@ For example, you can create or update a GitHub release and upload automatically
Source: link:release-upload[]
The HEAD of the local repository must be on top of a tag that has been pushed for this to work.
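This "HEAD must be exactly on a pushed tag" requirement can be checked with `git describe --exact-match --tags`, which fails when HEAD is not on a tag. A minimal sketch of that check, assuming a git checkout (the helper name is hypothetical):

```python
import subprocess

def current_exact_tag():
    # Raises subprocess.CalledProcessError if HEAD is not exactly on a tag,
    # which is how the release name can be derived from the repository state.
    return subprocess.check_output(
        ['git', 'describe', '--exact-match', '--tags']
    ).decode().rstrip()
```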
Create `LKMC_GITHUB_TOKEN` under: https://github.com/settings/tokens/new and save it to your `.bashrc`.

The implementation of this script is described at:

* https://stackoverflow.com/questions/5207269/how-to-release-a-build-artifact-asset-on-github-with-a-script/52354732#52354732
* https://stackoverflow.com/questions/38153418/can-someone-give-a-python-requests-example-of-uploading-a-release-asset-in-githu/52354681#52354681

=== Design rationale


@@ -15,7 +15,8 @@ class Main(common.BuildCliFunction):
    def __init__(self):
        super().__init__(
            description='''\
Build Buildroot. This includes, notably: the userland GCC cross-toolchain,
and the root filesystem.
''')
        self.add_argument(
            '--build-linux', default=False,


@@ -1,4 +1,8 @@
#!/usr/bin/env bash
# We want to move this into ./build. The only reason we haven't is that
# what to build depends on --size, which ./build does not support right now.
# The best way to solve this is to move the dependency checking into the run
# scripts, which will take a while to refactor.
set -eu
test_size=1
while [ $# -gt 0 ]; do


@@ -209,10 +209,12 @@ class CliFunction:
        argument = _Argument(*args, **kwargs)
        self._arguments[argument.key] = argument

    def cli_noexit(self, cli_args=None):
        '''
        Call the function from the CLI. Parse command line arguments
        to get all arguments.

        :return: the return of main
        '''
        parser = argparse.ArgumentParser(
            description=self._description,
@@ -233,12 +235,17 @@ class CliFunction:
        args = parser.parse_args(args=cli_args)
        return self._do_main(vars(args))

    def cli(self, *args, **kwargs):
        '''
        Same as cli_noexit, but also exit the program with status equal to the return value of main.

        main must return an integer for this to be used.

        None is considered 0.
        '''
        exit_status = self.cli_noexit(*args, **kwargs)
        if exit_status is None:
            exit_status = 0
        sys.exit(exit_status)
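The rename makes the exit-status convention explicit: `cli_noexit` returns whatever `main` returned, and `cli` converts that into a process exit status, mapping `None` to `0`. A standalone sketch of the same pattern (this toy class is not the actual `CliFunction`):

```python
import sys

class CliFunctionSketch:
    def main(self):
        # Returning None means success, as in the real scripts.
        return None

    def cli_noexit(self):
        # Returns main's value unchanged, so callers can inspect it.
        return self.main()

    def cli(self):
        # Converts main's return value into a process exit status.
        exit_status = self.cli_noexit()
        if exit_status is None:
            exit_status = 0
        sys.exit(exit_status)
```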
    def get_cli(self, **kwargs):
        '''
@@ -324,6 +331,7 @@ amazing function!
self.add_argument('pos-mandatory', help='Help for pos-mandatory', type=int),
self.add_argument('pos-optional', default=0, help='Help for pos-optional', type=int),
self.add_argument('args-star', help='Help for args-star', nargs='*'),

def main(self, **kwargs):
    del kwargs['_args_given']
    return kwargs
@@ -348,7 +356,7 @@ amazing function!
}

# Default CLI call with programmatic CLI arguments.
out = one_cli_function.cli_noexit(['1'])
assert out == default

# asdf
@@ -367,7 +375,7 @@ amazing function!
if '--bool-true':
    out = one_cli_function(pos_mandatory=1, bool_true=False)
    cli_out = one_cli_function.cli_noexit(['--no-bool-true', '1'])
    assert out == cli_out
    assert out['bool_true'] == False
    out['bool_true'] = default['bool_true']
@@ -375,7 +383,7 @@ amazing function!
if '--bool-false':
    out = one_cli_function(pos_mandatory=1, bool_false=True)
    cli_out = one_cli_function.cli_noexit(['--bool-false', '1'])
    assert out == cli_out
    assert out['bool_false'] == True
    out['bool_false'] = default['bool_false']
@@ -394,7 +402,7 @@ amazing function!
# --dest
out = one_cli_function(pos_mandatory=1, custom_dest='a')
cli_out = one_cli_function.cli_noexit(['--dest', 'a', '1'])
assert out == cli_out
assert out['custom_dest'] == 'a'
out['custom_dest'] = default['custom_dest']
@@ -405,7 +413,7 @@ amazing function!
assert out['pos_mandatory'] == 1
assert out['pos_optional'] == 2
assert out['args_star'] == ['3', '4']
cli_out = one_cli_function.cli_noexit(['1', '2', '3', '4'])
assert out == cli_out
out['pos_mandatory'] = default['pos_mandatory']
out['pos_optional'] = default['pos_optional']
@@ -414,21 +422,21 @@ amazing function!
# Star
out = one_cli_function(append=['1', '2'], pos_mandatory=1)
cli_out = one_cli_function.cli_noexit(['--append', '1', '--append', '2', '1'])
assert out == cli_out
assert out['append'] == ['1', '2']
out['append'] = default['append']
assert out == default

# Force a boolean value set on the config to be False on CLI.
assert one_cli_function.cli_noexit(['--no-bool-cli', '1'])['bool_cli'] is False

# Pick another config file.
assert one_cli_function.cli_noexit(['--config-file', 'cli_function_test_config_2.py', '1'])['bool_cli'] is False

# Extra config file for '*'.
assert one_cli_function.cli_noexit(['--config-file', 'cli_function_test_config_2.py', '1', '2', '3', '4'])['args_star'] == ['3', '4']
assert one_cli_function.cli_noexit(['--config-file', 'cli_function_test_config_2.py', '1', '2'])['args_star'] == ['asdf', 'qwer']

# get_cli
assert one_cli_function.get_cli(pos_mandatory=1, asdf='B') == [('--asdf', 'B'), ('--bool-cli',), ('1',)]

common.py

@@ -180,7 +180,14 @@ Implied by --quiet.
            '-q', '--quiet', default=False,
            help='''\
Don't print anything to stdout, except if it is part of an interactive terminal.

TODO: implement fully, some stuff is escaping it currently.
'''
        )
        self.add_argument(
            '--quit-on-failure',
            default=True,
            help='''\
Stop running at the first failed test.
'''
        )
        self.add_argument(
@@ -756,8 +763,8 @@ Valid emulators: {}
    def get_toolchain_tool(self, tool, allowed_toolchains=None):
        return '{}-{}'.format(self.get_toolchain_prefix(tool, allowed_toolchains), tool)

    def github_make_request(
        self,
        authenticate=False,
        data=None,
        extra_headers=None,
@@ -775,7 +782,7 @@ Valid emulators: {}
        if url_params is not None:
            path += '?' + urllib.parse.urlencode(url_params)
        request = urllib.request.Request(
            'https://' + subdomain + '.github.com/repos/' + self.env['github_repo_id'] + path,
            headers=headers,
            data=data,
            **extra_request_args
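The change turns `github_make_request` from a static method into an instance method so the repository id can come from `self.env` instead of a global. The URL assembly it performs can be sketched as a free function (a simplified model, not the script's actual helper):

```python
import urllib.parse

def github_api_url(github_repo_id, path, subdomain='api', url_params=None):
    # Mirrors how the request URL is assembled: optional query parameters
    # are urlencoded onto the path, then prefixed with the API host and repo.
    if url_params is not None:
        path += '?' + urllib.parse.urlencode(url_params)
    return 'https://' + subdomain + '.github.com/repos/' + github_repo_id + path
```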
@@ -816,7 +823,10 @@ Valid emulators: {}
    def main(self, *args, **kwargs):
        '''
        Run timed_main across all selected archs and emulators.

        :return: if any of the timed_mains exits non-zero and non-null,
                 return that. Otherwise, return 0.
        '''
        env = kwargs.copy()
        self.input_args = env.copy()
@@ -830,38 +840,52 @@ Valid emulators: {}
            real_emulators = consts['all_long_emulators']
        else:
            real_emulators = env['emulators']
        return_value = 0
        class GetOutOfLoop(Exception): pass
        try:
            ret = self.setup()
            if ret is not None and ret != 0:
                return_value = ret
                raise GetOutOfLoop()
            for emulator in real_emulators:
                for arch in real_archs:
                    if arch in env['arch_short_to_long_dict']:
                        arch = env['arch_short_to_long_dict'][arch]
                    if self.is_arch_supported(arch):
                        if not env['dry_run']:
                            start_time = time.time()
                        env['arch'] = arch
                        env['archs'] = [arch]
                        env['_args_given']['archs'] = True
                        env['all_archs'] = False
                        env['emulator'] = emulator
                        env['emulators'] = [emulator]
                        env['_args_given']['emulators'] = True
                        env['all_emulators'] = False
                        self.env = env.copy()
                        self._init_env(self.env)
                        self.sh = shell_helpers.ShellHelpers(
                            dry_run=self.env['dry_run'],
                            quiet=self.env['quiet'],
                        )
                        ret = self.timed_main()
                        if not env['dry_run']:
                            end_time = time.time()
                            self.ellapsed_seconds = end_time - start_time
                            self.print_time(self.ellapsed_seconds)
                        if ret is not None and ret != 0:
                            return_value = ret
                            if self.env['quit_on_failure']:
                                raise GetOutOfLoop()
                    elif not real_all_archs:
                        raise Exception('Unsupported arch for this action: ' + arch)
        except GetOutOfLoop:
            pass
        ret = self.teardown()
        if ret is not None and ret != 0:
            return_value = ret
        return return_value
    def make_build_dirs(self):
        os.makedirs(self.env['buildroot_build_build_dir'], exist_ok=True)
@@ -972,17 +996,29 @@ Valid emulators: {}
            self.env['userland_build_ext'],
        )

    def setup(self):
        '''
        Similar to timed_main, but gets run only once for all --arch and --emulator,
        before timed_main.

        Different from __init__, since at this point env has already been calculated,
        so variables that don't depend on --arch or --emulator can be used.
        '''
        pass

    def timed_main(self):
        '''
        Main action of the derived class.

        Gets run once for every --arch and every --emulator.
        '''
        pass

    def teardown(self):
        '''
        Similar to setup, but run after timed_main.
        '''
        pass
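Put together, the lifecycle this commit introduces is: `setup` once, `timed_main` once per arch/emulator pair, then `teardown` once, with non-zero returns remembered and the quit-on-failure option short-circuiting the loop. A toy model of that control flow (simplified names, not the real `LkmcCliFunction`):

```python
class Runner:
    def __init__(self, quit_on_failure=True):
        self.quit_on_failure = quit_on_failure
        self.calls = []

    def setup(self):
        # Runs once, before any timed_main.
        self.calls.append('setup')

    def timed_main(self, arch, emulator):
        # Runs once per (arch, emulator); pretend aarch64 on gem5 fails.
        self.calls.append((arch, emulator))
        return 1 if (arch, emulator) == ('aarch64', 'gem5') else 0

    def teardown(self):
        # Always runs, even after a failure.
        self.calls.append('teardown')

    def main(self, archs, emulators):
        class GetOutOfLoop(Exception):
            pass
        return_value = 0
        try:
            self.setup()
            for emulator in emulators:
                for arch in archs:
                    ret = self.timed_main(arch, emulator)
                    if ret:
                        return_value = ret
                        if self.quit_on_failure:
                            raise GetOutOfLoop()
        except GetOutOfLoop:
            pass
        self.teardown()
        return return_value
```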
class BuildCliFunction(LkmcCliFunction):
    '''
@@ -1070,13 +1106,6 @@ class TestCliFunction(LkmcCliFunction):
        kwargs['defaults'] = defaults
        super().__init__(*args, **kwargs)
        self.tests = []

    def run_test(self, run_obj, run_args=None, test_id=None):
        '''
@@ -1109,7 +1138,7 @@ Stop running at the first failed test.
            test_result = TestResult.PASS
        else:
            test_result = TestResult.FAIL
            if self.env['quit_on_failure']:
                self.log_error('Test failed')
                sys.exit(1)
        self.log_info('test_result {}'.format(test_result.name))
@@ -1121,6 +1150,9 @@ Stop running at the first failed test.
        self.tests.append(Test(test_id_string, test_result, ellapsed_seconds))

    def teardown(self):
        '''
        :return: 1 if any test failed, 0 otherwise
        '''
        self.log_info('Test result summary')
        passes = []
        fails = []
@@ -1136,4 +1168,5 @@ Stop running at the first failed test.
            for test in fails:
                self.log_info(test)
            self.log_error('A test failed')
            return 1
        return 0


@@ -20,4 +20,4 @@ class Main(common.LkmcCliFunction):
        ])

if __name__ == '__main__':
    Main().cli()

getvar

@@ -5,7 +5,7 @@ import common
class Main(common.LkmcCliFunction):
    def __init__(self):
        super().__init__(
            defaults = {
                'print_time': False,
            },
            description='''\


@@ -26,4 +26,4 @@ Convert a QEMU `-trace exec_tb` to text form.
        )

if __name__ == '__main__':
    Main().cli()

release

@@ -1,31 +0,0 @@
#!/usr/bin/env python3
'''
https://github.com/cirosantilli/linux-kernel-module-cheat#release
'''
import imp
import os
import subprocess
import time
import common
release_zip = imp.load_source('release_zip', os.path.join(kwargs['root_dir'], 'release-zip'))
release_upload = imp.load_source('release_upload', os.path.join(kwargs['root_dir'], 'release-upload'))
start_time = time.time()
# TODO factor those out so we don't redo the same thing multiple times.
# subprocess.check_call([os.path.join(kwargs['root_dir'], 'test')])
# subprocess.check_call([os.path.join(kwargs['root_dir'], 'bench-all', '-A', '-u'])
# A clean release requires a full rebuild unless we hack it :-(
# We can't just use our current build as it contains packages we've
# installed in random experiments. And with EXT2: we can't easily
# know what the smallest root filesystem size is and use it either...
# https://stackoverflow.com/questions/47320800/how-to-clean-only-target-in-buildroot
subprocess.check_call([os.path.join(kwargs['root_dir'], 'build'), '--all-archs', '--download-dependencies', 'release'])
release_zip.main()
subprocess.check_call(['git', 'push'])
release_upload.main()
end_time = time.time()
self.print_time(end_time - start_time)


@@ -1,79 +1,84 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
'''
Usage: https://github.com/cirosantilli/linux-kernel-module-cheat#release-zip
Implementation:
* https://stackoverflow.com/questions/5207269/how-to-release-a-build-artifact-asset-on-github-with-a-script/52354732#52354732
* https://stackoverflow.com/questions/38153418/can-someone-give-a-python-requests-example-of-uploading-a-release-asset-in-githu/52354681#52354681
'''
import json import json
import os import os
import subprocess
import sys import sys
import urllib.error import urllib.error
import common import common
from shell_helpers import LF from shell_helpers import LF
-def main():
-    repo = kwargs['github_repo_id']
-    tag = 'sha-{}'.format(kwargs['sha'])
-    upload_path = kwargs['release_zip_file']
-    # Check the release already exists.
-    try:
-        _json = self.github_make_request(path='/releases/tags/' + tag)
-    except urllib.error.HTTPError as e:
-        if e.code == 404:
-            release_exists = False
-        else:
-            raise e
-    else:
-        release_exists = True
-        release_id = _json['id']
-    # Create release if not yet created.
-    if not release_exists:
-        _json = self.github_make_request(
-            authenticate=True,
-            data=json.dumps({
-                'tag_name': tag,
-                'name': tag,
-                'prerelease': True,
-            }).encode(),
-            path='/releases'
-        )
-        release_id = _json['id']
-    asset_name = os.path.split(upload_path)[1]
-    # Clear the prebuilts for a upload.
-    _json = self.github_make_request(
-        path=('/releases/' + str(release_id) + '/assets'),
-    )
-    for asset in _json:
-        if asset['name'] == asset_name:
-            _json = self.github_make_request(
-                authenticate=True,
-                path=('/releases/assets/' + str(asset['id'])),
-                method='DELETE',
-            )
-            break
-    # Upload the prebuilt.
-    with open(upload_path, 'br') as myfile:
-        content = myfile.read()
-    _json = self.github_make_request(
-        authenticate=True,
-        data=content,
-        extra_headers={'Content-Type': 'application/zip'},
-        path=('/releases/' + str(release_id) + '/assets'),
-        subdomain='uploads',
-        url_params={'name': asset_name},
-    )
-
-if __name__ == '__main__':
-    main()
+class Main(common.LkmcCliFunction):
+    def __init__(self):
+        super().__init__(
+            description='''\
+https://github.com/cirosantilli/linux-kernel-module-cheat#release-upload
+''',
+        )
+
+    def timed_main(self):
+        # https://stackoverflow.com/questions/3404936/show-which-git-tag-you-are-on
+        tag = subprocess.check_output([
+            'git',
+            'describe',
+            '--exact-match',
+            '--tags'
+        ]).decode().rstrip()
+        upload_path = self.env['release_zip_file']
+        # Check the release already exists.
+        try:
+            _json = self.github_make_request(path='/releases/tags/' + tag)
+        except urllib.error.HTTPError as e:
+            if e.code == 404:
+                release_exists = False
+            else:
+                raise e
+        else:
+            release_exists = True
+            release_id = _json['id']
+        # Create release if not yet created.
+        if not release_exists:
+            _json = self.github_make_request(
+                authenticate=True,
+                data=json.dumps({
+                    'tag_name': tag,
+                    'name': tag,
+                    'prerelease': True,
+                }).encode(),
+                path='/releases'
+            )
+            release_id = _json['id']
+        asset_name = os.path.split(upload_path)[1]
+        # Clear the prebuilts for a upload.
+        _json = self.github_make_request(
+            path=('/releases/' + str(release_id) + '/assets'),
+        )
+        for asset in _json:
+            if asset['name'] == asset_name:
+                _json = self.github_make_request(
+                    authenticate=True,
+                    path=('/releases/assets/' + str(asset['id'])),
+                    method='DELETE',
+                )
+                break
+        # Upload the prebuilt.
+        self.log_info('Uploading the release, this may take several seconds / a few minutes.')
+        with open(upload_path, 'br') as myfile:
+            content = myfile.read()
+        _json = self.github_make_request(
+            authenticate=True,
+            data=content,
+            extra_headers={'Content-Type': 'application/zip'},
+            path=('/releases/' + str(release_id) + '/assets'),
+            subdomain='uploads',
+            url_params={'name': asset_name},
+        )
+
+if __name__ == '__main__':
+    Main().cli()
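The new `timed_main` treats an HTTP 404 from the releases endpoint as "release does not exist yet" and re-raises anything else. That pattern can be exercised in isolation, without touching the GitHub API (a sketch: `github_make_request` is this repository's own helper, replaced here by plain callables):

```python
import urllib.error

def release_exists_from(fetch):
    # fetch() must return the release JSON, or raise HTTPError like
    # an authenticated GitHub API call would.
    try:
        _json = fetch()
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return False, None
        raise
    return True, _json['id']

# Simulate both outcomes locally.
def missing():
    raise urllib.error.HTTPError('url', 404, 'Not Found', {}, None)

def present():
    return {'id': 42}

print(release_exists_from(missing))   # (False, None)
print(release_exists_from(present))   # (True, 42)
```

Only a 404 is swallowed: a 401 from a bad token or a 5xx from GitHub still aborts the upload instead of silently creating a duplicate release.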


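The tag now comes from `git describe --exact-match --tags`, which succeeds only when `HEAD` sits exactly on a tag, so an upload cannot accidentally happen from an untagged commit. A sketch of that behavior in a throwaway repository (assumes `git` is on the `PATH`; names are made up):

```python
import subprocess
import tempfile

repo = tempfile.mkdtemp()

def git(*args):
    return subprocess.check_output(('git', '-C', repo) + args).decode().rstrip()

git('init', '-q')
git('-c', 'user.email=you@example.com', '-c', 'user.name=you',
    'commit', '-q', '--allow-empty', '-m', 'release')
git('tag', 'v1.0')

# On the tagged commit this prints the tag name, which release-upload
# then reuses as the GitHub release tag.
tag = git('describe', '--exact-match', '--tags')
print(tag)  # v1.0

# One commit past the tag, the same command exits non-zero.
git('-c', 'user.email=you@example.com', '-c', 'user.name=you',
    'commit', '-q', '--allow-empty', '-m', 'past the tag')
try:
    git('describe', '--exact-match', '--tags')
except subprocess.CalledProcessError:
    print('not on a tag')
```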
@@ -1,26 +1,35 @@
 #!/usr/bin/env python3
-'''
-https://github.com/cirosantilli/linux-kernel-module-cheat#release-zip
-'''
 import os
-import subprocess
 import zipfile
 import common
+from shell_helpers import LF
 
-def main():
-    os.makedirs(self.env['release_dir'], exist_ok=True)
-    if os.path.exists(self.env['release_zip_file']):
-        self.sh.rmrf(self.env['release_zip_file'])
-    zipf = zipfile.ZipFile(self.env['release_zip_file'], 'w', zipfile.ZIP_DEFLATED)
-    for arch in self.env['all_long_archs']:
-        self.setup(common.get_argparse(default_args={'arch': arch}))
-        zipf.write(self.env['qcow2_file'], arcname=os.path.relpath(self.env['qcow2_file'], self.env['root_dir']))
-        zipf.write(self.env['linux_image'], arcname=os.path.relpath(self.env['linux_image'], self.env['root_dir']))
-    zipf.close()
+class Main(common.LkmcCliFunction):
+    def __init__(self):
+        super().__init__(
+            description='''\
+https://github.com/cirosantilli/linux-kernel-module-cheat#release-zip
+''',
+            defaults={
+                'print_time': False,
+            }
+        )
+        self.qcow2s_linux_images = []
+
+    def timed_main(self):
+        self.qcow2s_linux_images.append((self.env['qcow2_file'], self.env['linux_image']))
+
+    def teardown(self):
+        os.makedirs(self.env['release_dir'], exist_ok=True)
+        self.sh.rmrf(self.env['release_zip_file'])
+        self.log_info('Creating zip: ' + self.env['release_zip_file'])
+        with zipfile.ZipFile(self.env['release_zip_file'], 'w', zipfile.ZIP_DEFLATED) as zipf:
+            for qcow2, linux_image in self.qcow2s_linux_images:
+                self.log_info('Adding file: ' + qcow2)
+                zipf.write(qcow2, arcname=os.path.relpath(qcow2, self.env['root_dir']))
+                self.log_info('Adding file: ' + linux_image)
+                zipf.write(linux_image, arcname=os.path.relpath(linux_image, self.env['root_dir']))
 
 if __name__ == '__main__':
-    main()
+    Main().cli()

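The zip-building pattern above, writing each file under an `arcname` relative to a root directory, keeps absolute host paths out of the archive. A standalone sketch with a stand-in file (paths are made up):

```python
import os
import tempfile
import zipfile

# Stand-in for an image file that the release script would collect.
root_dir = tempfile.mkdtemp()
image = os.path.join(root_dir, 'out', 'Image')
os.makedirs(os.path.dirname(image))
with open(image, 'w') as f:
    f.write('fake kernel image')

zip_path = os.path.join(root_dir, 'lkmc.zip')
with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
    # arcname strips the host prefix, so the archive stores the path
    # relative to root_dir rather than the absolute build path.
    zipf.write(image, arcname=os.path.relpath(image, root_dir))

with zipfile.ZipFile(zip_path) as zipf:
    names = zipf.namelist()
print(names)  # ['out/Image']
```

Without `arcname`, `ZipFile.write` would embed the full host path, and extraction on another machine would recreate the uploader's directory layout.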
run

@@ -641,4 +641,4 @@ Run QEMU with VNC instead of the default SDL. Connect to it with:
         return exit_status
 
 if __name__ == '__main__':
-    Main().cli_exit()
+    Main().cli()

test

@@ -8,7 +8,7 @@ class Main(common.TestCliFunction):
     def __init__(self):
         super().__init__(
             description='''\
-Run all tests in one go.
+https://github.com/cirosantilli/linux-kernel-module-cheat#automated-tests
 '''
         )
         self.add_argument(
@@ -35,5 +35,5 @@ Size of the tests to run. Scale:
         self.run_test(self.import_path_main('test-gdb'), run_args, 'test-gdb')
 
 if __name__ == '__main__':
-    Main().cli_exit()
+    Main().cli()

@@ -89,5 +89,4 @@ See ./test --help for --size.
         # self._bench(gem5_script='biglittle')
 
 if __name__ == '__main__':
-    Main().cli_exit()
+    Main().cli()

@@ -7,7 +7,11 @@ import common
 class Main(common.TestCliFunction):
     def __init__(self):
-        super().__init__()
+        super().__init__(
+            description='''\
+https://github.com/cirosantilli/linux-kernel-module-cheat#test-gdb
+'''
+        )
         self.add_argument(
             'tests',
             nargs='*',
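`TestCliFunction` builds on `argparse`, and `add_argument('tests', nargs='*')` is what lets `./test-gdb` run every test when invoked with no arguments. A stock-`argparse` sketch of that behavior (test names here are made up):

```python
import argparse

# nargs='*' collects zero or more positional arguments into a list,
# so an empty command line means "run everything".
parser = argparse.ArgumentParser()
parser.add_argument('tests', nargs='*', default=[])
print(parser.parse_args([]).tests)                  # []
print(parser.parse_args(['break', 'step']).tests)   # ['break', 'step']
```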