Last updated on 2025-12-26 03:51:34 CET.
| Flavor | Version | Tinstall (s) | Tcheck (s) | Ttotal (s) | Status | Flags |
|---|---|---|---|---|---|---|
| r-devel-linux-x86_64-debian-clang | 0.0.7 | 5.17 | 189.74 | 194.91 | OK | |
| r-devel-linux-x86_64-debian-gcc | 0.0.7 | 3.65 | 147.32 | 150.97 | ERROR | |
| r-devel-linux-x86_64-fedora-clang | 0.0.7 | 9.00 | 348.99 | 357.99 | OK | |
| r-devel-linux-x86_64-fedora-gcc | 0.0.7 | 9.00 | 341.13 | 350.13 | OK | |
| r-devel-windows-x86_64 | 0.0.7 | 7.00 | 277.00 | 284.00 | OK | |
| r-patched-linux-x86_64 | 0.0.7 | 5.60 | 193.11 | 198.71 | OK | |
| r-release-linux-x86_64 | 0.0.7 | 6.01 | 210.09 | 216.10 | OK | |
| r-release-macos-arm64 | 0.0.7 | 1.00 | 56.00 | 57.00 | OK | |
| r-release-macos-x86_64 | 0.0.7 | 4.00 | 259.00 | 263.00 | OK | |
| r-release-windows-x86_64 | 0.0.7 | 7.00 | 277.00 | 284.00 | OK | |
| r-oldrel-macos-arm64 | 0.0.7 | 1.00 | 64.00 | 65.00 | OK | |
| r-oldrel-macos-x86_64 | 0.0.7 | 4.00 | 273.00 | 277.00 | OK | |
| r-oldrel-windows-x86_64 | 0.0.7 | 8.00 | 378.00 | 386.00 | OK | |
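The ERROR is confined to the r-devel-linux-x86_64-debian-gcc flavor. For reference, a minimal sketch of reproducing the check locally with the rcmdcheck package, assuming the mllrnrs sources sit in the working directory and rcmdcheck is installed; the environment variables mirror the values set in tests/testthat.R shown below:

```r
## Local reproduction sketch (assumption: package source tree is the working
## directory). CRAN check machines limit packages to 2 cores, so mirror the
## thread limits that tests/testthat.R sets before running the tests.
Sys.setenv(OMP_THREAD_LIMIT = 2, Ncpu = 2)

rcmdcheck::rcmdcheck(
  path = ".",          # package source directory
  args = "--as-cran",  # apply the same additional checks CRAN uses
  error_on = "never"   # report failures instead of stopping at the first one
)
```

Note that the failure only surfaces on an R build with the r-devel behaviour described below, so a reproduction on r-release may pass cleanly.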
Version: 0.0.7
Check: tests
Result: ERROR
Running ‘testthat.R’ [57s/189s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
> # https://github.com/Rdatatable/data.table/issues/5658
> Sys.setenv("OMP_THREAD_LIMIT" = 2)
> Sys.setenv("Ncpu" = 2)
>
> library(testthat)
> library(mllrnrs)
>
> test_check("mllrnrs")
CV fold: Fold1
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 6.164 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 8.461 seconds
3) Running FUN 2 times in 2 thread(s)... 1.041 seconds
CV fold: Fold2
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 6.32 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 6.19 seconds
3) Running FUN 2 times in 2 thread(s)... 0.57 seconds
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 6.159 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 9.689 seconds
3) Running FUN 2 times in 2 thread(s)... 0.711 seconds
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Saving _problems/test-binary-287.R
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Saving _problems/test-multiclass-162.R
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold2
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold3
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold1
Saving _problems/test-multiclass-294.R
CV fold: Fold1
Registering parallel backend using 2 cores.
Running initial scoring function 5 times in 2 thread(s)... 4.135 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 0.853 seconds
3) Running FUN 2 times in 2 thread(s)... 0.436 seconds
CV fold: Fold2
Registering parallel backend using 2 cores.
Running initial scoring function 5 times in 2 thread(s)... 5.017 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 1.223 seconds
3) Running FUN 2 times in 2 thread(s)... 0.673 seconds
CV fold: Fold3
Registering parallel backend using 2 cores.
Running initial scoring function 5 times in 2 thread(s)... 5.455 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 1.269 seconds
3) Running FUN 2 times in 2 thread(s)... 0.69 seconds
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold2
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold3
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 7.35 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 9.088 seconds
3) Running FUN 2 times in 2 thread(s)... 0.765 seconds
CV fold: Fold2
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 6.388 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 2.637 seconds
3) Running FUN 2 times in 2 thread(s)... 0.943 seconds
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 6.687 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 14.959 seconds
3) Running FUN 2 times in 2 thread(s)... 0.797 seconds
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
══ Skipped tests (3) ═══════════════════════════════════════════════════════════
• On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5',
'test-multiclass.R:57:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-binary.R:287:5'): test nested cv, grid, binary - ranger ────────
Error in `xtfrm.data.frame(structure(list(`0` = 0.379858310721837, `1` = 0.620141689278164), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55a12d6f8070>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
▆
1. ├─ranger_optimizer$execute() at test-binary.R:287:5
2. │ └─mlexperiments:::.run_cv(self = self, private = private)
3. │ └─mlexperiments:::.fold_looper(self, private)
4. │ ├─base::do.call(private$cv_run_model, run_args)
5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. │ ├─base::do.call(.cv_run_nested_model, args)
7. │ └─mlexperiments (local) `<fn>`(...)
8. │ └─hparam_tuner$execute(k = self$k_tuning)
9. │ └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. │ └─mlexperiments:::.run_optimizer(...)
11. │ └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
12. │ ├─base::do.call(...)
13. │ └─mlexperiments (local) `<fn>`(...)
14. │ └─base::lapply(...)
15. │ └─mlexperiments (local) FUN(X[[i]], ...)
16. │ ├─base::do.call(FUN, fun_parameters)
17. │ └─mlexperiments (local) `<fn>`(...)
18. │ ├─base::do.call(private$fun_optim_cv, kwargs)
19. │ └─mllrnrs (local) `<fn>`(...)
20. │ ├─base::do.call(ranger_predict, pred_args)
21. │ └─mllrnrs (local) `<fn>`(...)
22. │ └─kdry::mlh_reshape(preds)
23. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
24. │ └─data.table:::`[.data.table`(...)
25. └─base::which.max(.SD)
26. ├─base::xtfrm(`<dt[,2]>`)
27. └─base::xtfrm.data.frame(`<dt[,2]>`)
── Error ('test-multiclass.R:162:5'): test nested cv, grid, multiclass - lightgbm ──
Error in `xtfrm.data.frame(structure(list(`0` = 0.20774260202068, `1` = 0.136781829323219, `2` = 0.655475568656101), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55a12d6f8070>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
▆
1. ├─lightgbm_optimizer$execute() at test-multiclass.R:162:5
2. │ └─mlexperiments:::.run_cv(self = self, private = private)
3. │ └─mlexperiments:::.fold_looper(self, private)
4. │ ├─base::do.call(private$cv_run_model, run_args)
5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. │ ├─base::do.call(.cv_run_nested_model, args)
7. │ └─mlexperiments (local) `<fn>`(...)
8. │ └─mlexperiments:::.cv_fit_model(...)
9. │ ├─base::do.call(self$learner$predict, pred_args)
10. │ └─mlexperiments (local) `<fn>`(...)
11. │ ├─base::do.call(private$fun_predict, kwargs)
12. │ └─mllrnrs (local) `<fn>`(...)
13. │ └─kdry::mlh_reshape(preds)
14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
15. │ └─data.table:::`[.data.table`(...)
16. └─base::which.max(.SD)
17. ├─base::xtfrm(`<dt[,3]>`)
18. └─base::xtfrm.data.frame(`<dt[,3]>`)
── Error ('test-multiclass.R:294:5'): test nested cv, grid, multi:softprob - xgboost, with weights ──
Error in `xtfrm.data.frame(structure(list(`0` = 0.250160574913025, `1` = 0.124035485088825, `2` = 0.62580394744873), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55a12d6f8070>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
▆
1. ├─xgboost_optimizer$execute() at test-multiclass.R:294:5
2. │ └─mlexperiments:::.run_cv(self = self, private = private)
3. │ └─mlexperiments:::.fold_looper(self, private)
4. │ ├─base::do.call(private$cv_run_model, run_args)
5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. │ ├─base::do.call(.cv_run_nested_model, args)
7. │ └─mlexperiments (local) `<fn>`(...)
8. │ └─mlexperiments:::.cv_fit_model(...)
9. │ ├─base::do.call(self$learner$predict, pred_args)
10. │ └─mlexperiments (local) `<fn>`(...)
11. │ ├─base::do.call(private$fun_predict, kwargs)
12. │ └─mllrnrs (local) `<fn>`(...)
13. │ └─kdry::mlh_reshape(preds)
14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
15. │ └─data.table:::`[.data.table`(...)
16. └─base::which.max(.SD)
17. ├─base::xtfrm(`<dt[,3]>`)
18. └─base::xtfrm.data.frame(`<dt[,3]>`)
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
Error:
! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc
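All three failures hit the same spot: `kdry::mlh_reshape()` picks the predicted class per row with `cn[which.max(.SD)]` on a one-row data.table of class probabilities, and on this r-devel build `which.max()` falls back to `xtfrm()`, whose data-frame method refuses the input ("cannot xtfrm data frames"), as the backtraces show. Below is a minimal sketch of the failing pattern together with one possible workaround; the probability values are made up and the workaround is illustrative only, not necessarily how kdry or mllrnrs resolve the issue upstream.

```r
## Minimal sketch of the failing pattern from the backtraces above.
## Assumption: an R build (like this r-devel flavor) where which.max() on a
## data.frame ends up in xtfrm.data.frame(); values are illustrative.
library(data.table)

preds <- data.table(`0` = c(0.38, 0.21), `1` = c(0.62, 0.79))
cn <- colnames(preds)

## Failing pattern: .SD is a one-row data.table per group, and
## which.max(.SD) dispatches through xtfrm(), which errors on data frames.
# preds[, cn[which.max(.SD)], by = seq_len(nrow(preds))]

## Hypothetical workaround: flatten the one-row .SD to a plain numeric
## vector before calling which.max(), which avoids xtfrm() entirely.
preds[, cn[which.max(unlist(.SD))], by = seq_len(nrow(preds))]
```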