It seems we have everything ready, except #48 - Continuous Integration. @ErikSchierboom said he can help with scripts and GitHub Actions, once we implement automated testing of all exercises at once.
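Testing all exercises at once could start as a simple shell loop (just a sketch under assumptions: the `exercises/practice/<slug>/<slug>-test.red` layout from this repo, and a `red` binary on `PATH` — not the actual CI script):

```shell
# sketch: run every practice exercise's test file in turn
status=0
for dir in exercises/practice/*/; do
    [ -d "$dir" ] || continue              # no exercises found: nothing to do
    slug=$(basename "$dir")
    ( cd "$dir" && red "$slug-test.red" ) || { echo "FAILED: $slug"; status=1; }
done
echo "overall status: $status"
```

A GitHub Actions job could then just check out the repo, install Red, and run a script like this.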
pov: update config
At least 20 practice exercises are needed to launch the track.
Each exercise name (the so-called "slug") links to its description.
The task's difficulty is given in parentheses. The choice is arbitrary, based on a personal reading of the exercise description, and is open for discussion/change.
This list was chosen randomly from Exercism's problem database.
It is sorted by difficulty, and this order should also be kept in config.json.
To generate a new practice exercise's scaffolding, run:

$ red _tools/generate-practice-exercise.red <exercise-slug>

In exercises/practice/<exercise-slug>/<exercise-slug>-test.red, change the comments like this, to test the example solution:

; test-init/limit %exercise-slug.red 1
test-init/limit %.meta/example.red 1

Implement the solution in exercises/practice/<exercise-slug>/.meta/example.red, raising test-init's limit from 1 to however many tests you want to run in <exercise-slug>-test.red. Run the tests with:

$ cd exercises/practice/<exercise-slug>
$ red <exercise-slug>-test.red

When everything passes, revert the test-init line: uncomment the solution file, comment out the example file, and change the limit (the second argument) back to 1. Add the exercise to config.json. If you want, add practices and prerequisites concepts. Copy the exercise's config to the proper position, so that all exercises stay sorted from easiest to toughest.

Just pushed the last exercise of the 20, yay!
pov: first function passing tests
pov: all tests passing
show test output on error
auto-generated files for sgf-parsing exercise
implement sgf-parsing example
Merge branch 'main' into sgf_parsing
🤖 Sync org-wide files to upstream repo (#61)
More info: https://github.com/exercism/org-wide-files/commit/68ae5ebb2706515f915d6e44814827cb4af06732
🤖 Sync org-wide files to upstream repo (#63)
More info: https://github.com/exercism/org-wide-files/commit/f28daf625fa326fafdcf261db7699377c8d1ed37
add sgf-parsing exercise concepts
includes adding a concept for Red/Parse
update authors
Exercise: sgf parsing
Merge pull request #62 from dander/sgf_parsing
concepts in readme's "contributing" section
remove debugging output
largest-series-product: various error firing methods
remove loziniak as author from dander's exercises
There's some work started on it: https://github.com/loziniak/red-1/commit/638491ac7e7de6403d76d0fa9220bdb6ccaf9f55
Work on concepts should probably be moved to a separate branch, because we don't want to release this track with empty concepts. We should remove all concepts from the directory and config on the main branch, as well as references to them from practice exercises' practices and prerequisites lists in the track's config.json.
I finally got all the tests working.
I added a fair amount more feedback to the test runner to help indicate what is wrong when tests fail. It could be useful for other exercises as well.
I also added an optional test runner argument for the number of tests to run.
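An optional argument like that might be handled along these lines — a sketch, not the actual runner code; it assumes Red exposes the command-line arguments in system/options/args:

```red
Red [Title: "Test-count argument (sketch)"]

; e.g.  red exercise-test.red 5   -> run 5 tests
args: any [system/options/args []]
test-count: either empty? args [1] [to integer! first args]
print ["running" test-count "test(s)"]
```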
One thing I'm a bit uncertain about is that the escaping rules are a bit peculiar (the "escaped property" test). I'm not sure how much of it is an assumption, inherited from other languages, that things like newlines will be represented as \n, but it's a bit awkward for Red. I'm not sure if it would make more sense to convert them to ^/ in the test cases instead...
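For context: Red strings use caret escapes rather than backslashes, so a literal \n from a canonical test case is just two plain characters in Red. A sketch of the conversion being discussed (illustrative strings, not the actual test data):

```red
Red [Title: "Backslash vs caret escapes (sketch)"]

s: "first^/second"          ; ^/ is Red's newline escape
print s                     ; prints two lines

raw: "first\nsecond"        ; here \n is two literal characters
replace/all raw "\n" "^/"   ; convert to a real newline
print raw
```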
Some languages have recently been added to Exercism and are actively developed. It's good to track their updates: https://github.com/exercism/unison https://github.com/exercism/wasm
Hmm, I just found more info in the configlet docs, but it suggests that UUIDs can only be generated by configlet. Perhaps we could just add info about using external tools, and link from everywhere to this page? I can do a PR, but I'd like an opinion on that before starting to work on it.
Perhaps UUIDs could have their own page in the docs, and be linked from all the other pages? They cause some confusion, and info on them is spread across the documentation, the main part being in the concept exercises section of the config.json docs.
There is no central point for concepts. For me it felt natural to just solve the exercise examples and see whether I'd need any new concepts to explain them. So, much as you did with parse. I have some initial work done to start with the basics and evaluation concepts. Are you thinking about working more on concepts? It's a great feature; perhaps we could add concepts one by one. There is a task for it: #37.
UUIDs can be generated offline by hand; they just need to be unique throughout the project. You can use configlet for this, or just any online or system tool you prefer. Also, during track unit tests, configlet is used to check the uniqueness of UUIDs, so any errors are caught.
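For example, one offline option is Python's standard library (uuidgen, available on most Unix systems, works just as well):

```shell
# generate a random v4 UUID offline with Python's standard library
python3 -c "import uuid; print(uuid.uuid4())"
# (or simply:  uuidgen)
```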