Implement checks for fulfilment of algorithm requirements in the OsipiBase class #96
harshithasudhakar wants to merge 16 commits into OSIPI:main from
Conversation
Thanks for this. It's a good start. I'd like some feedback from @IvanARashid on this too. Basically I have one main suggestion for making it easier to maintain. It would also be great to get a test that goes through the algorithms and somehow verifies the validators are running.
Also, it looks like tests are failing because of this. I think it's too strict, or not all the algorithms are behaving in quite the standard way.
It would be good if the check error messages also printed both the requirement and the current input. It would make it clearer for us why the tests are currently failing.
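As a sketch of what such a message could look like, a dimensionality check (matching the `Requires {min_dim}-{max_dim}D` string seen in the diff below) might report both the requirement and the offending input. This is a hypothetical standalone helper; the names are illustrative, not the actual OsipiBase API:

```python
def check_ndim(shape, min_dim=1, max_dim=4):
    """Validate input dimensionality, reporting both the requirement
    and the actual input in the error message.

    Hypothetical helper for illustration; not taken from OsipiBase.
    """
    ndim = len(shape)
    if not (min_dim <= ndim <= max_dim):
        raise ValueError(
            f"Requires {min_dim}-{max_dim}D data, but received "
            f"{ndim}D input with shape {shape}."
        )
```

An error like "Requires 1-4D data, but received 5D input with shape (2, 3, 4, 5, 6)." makes it immediately clear which requirement failed and why.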
Sorry for the delay, I've made the changes now. Please let me know if there are any more changes I should make.
Still failing tests.
etpeterson left a comment
The tests are still failing because of the checks you added. For example here: https://github.com/OSIPI/TF2.4_IVIM-MRI_CodeCollection/actions/runs/14249360771/job/39959244676#step:6:79
Either the checks need to be modified or how the algorithms are called. Probably the algorithm calls aren't consistent or as they should be, but I haven't looked into it. Let me know if you need some help with this.
Checking in on this because someone might start work on tests related to this.
Hi @harshithasudhakar, thanks for this. I think the major remaining issue is the failing tests. Are you seeing these?
etpeterson left a comment
The failing tests need to be resolved before merging.

I've fixed them, the test cases should pass now.
etpeterson left a comment
Tests are still failing, sorry.
Just checking on this, are you planning on finishing this up?
oliverchampion left a comment
Hey! I was thinking it may make more sense to come up with some "default" values for the initial guess, bounds, and cut-off when they are missing, rather than give an error.
Right now I personally handle these missing values in my wrapper, but it may be better to have OsipiBase come up with default values that are identical for all algorithms.
    #else:
    #return True
    return True
    if not hasattr(self, "required_thresholds"):
I am wondering whether, for some attributes like "required_thresholds", we should fall back to "default" values when the checks fail. For example, 200 would be an okay value for most body applications. Then we could issue a warning stating that the default value is used instead.
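A minimal sketch of that fallback-with-warning behaviour. The 200 s/mm² default comes from the comment above; the function name and return shape are hypothetical, not the actual OsipiBase API:

```python
import warnings

# Suggested in the review as a reasonable value for most body applications.
DEFAULT_THRESHOLD = 200  # s/mm^2

def resolve_thresholds(thresholds):
    """Return the user's thresholds, or fall back to a warned-about
    default instead of raising an error.

    Hypothetical helper for illustration; not taken from OsipiBase.
    """
    if thresholds is None:
        warnings.warn(
            f"No thresholds supplied; falling back to the default of "
            f"{DEFAULT_THRESHOLD} s/mm^2.",
            UserWarning,
        )
        return [DEFAULT_THRESHOLD]
    return thresholds
```

The warning keeps the behaviour visible to the user while letting the tests (and wrappers that omit thresholds) proceed.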
src/wrappers/OsipiBase.py (outdated)
    #else:
    #return True
    return True
    if self.required_bounds is False and self.bounds is None:
Same for bounds. I think if no bounds are given, but bounds will be used, then we should give some default bounds.
src/wrappers/OsipiBase.py (outdated)
    def osipi_check_required_bvalues():
        """Minimum number of b-values required"""
        pass
    if self.required_initial_guess is False and self.initial_guess is None:
Same for the initial guess :)
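One way to make the defaults identical for all algorithms is to keep them in one place in the base class. The values below are purely illustrative placeholders for an assumed (f, D*, D, S0) IVIM parameterisation, not taken from the repository:

```python
# Illustrative defaults for an assumed (f, D*, D, S0) parameterisation;
# the actual values would need to be agreed on for the OSIPI code collection.
DEFAULT_BOUNDS = ([0.0, 0.0, 0.0, 0.0], [1.0, 0.1, 0.004, 3.0])
DEFAULT_INITIAL_GUESS = [0.1, 0.03, 0.001, 1.0]

def resolve_fit_inputs(bounds=None, initial_guess=None):
    """Substitute shared defaults when the caller provides nothing,
    so every algorithm sees the same fallback values.

    Hypothetical helper for illustration; not taken from OsipiBase.
    """
    if bounds is None:
        bounds = DEFAULT_BOUNDS
    if initial_guess is None:
        initial_guess = DEFAULT_INITIAL_GUESS
    return bounds, initial_guess
```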
        f"Requires {min_dim}-{max_dim}D."
    )

    def osipi_check_required_bvalues(self):
b-values are another story; here we cannot give a "default".
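Since there is no sensible default set of b-values, that check has to stay a hard error. A sketch under that assumption (standalone function for illustration; the real check would be a method on OsipiBase):

```python
def check_required_bvalues(bvalues, minimum_bvalues):
    """Fail loudly when too few b-values are supplied.

    Unlike thresholds, bounds, or the initial guess, b-values are
    acquisition-dependent, so no default can be substituted.
    Hypothetical helper for illustration; not taken from OsipiBase.
    """
    n = 0 if bvalues is None else len(bvalues)
    if n < minimum_bvalues:
        raise ValueError(
            f"At least {minimum_bvalues} b-values are required, got {n}."
        )
```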
What is the status of this PR? Should we merge/close?
@oliverchampion Well, some algorithms don't require all parameters. So wouldn't it be best to leave it as


Describe the changes you have made in this PR
Implemented centralized validation logic for algorithm inputs in the base class.
Link this PR to an issue [optional]
Fixes #45
Checklist