Conversation
```python
def shift_images(
    images,
```
Should probably be moved to core/imaging_utils.py
Good point - I put it here for quick iteration, haven't moved it yet.
When looking at experimental data with dead pixels (and not preprocessed data like the above), be sure to filter them out first. You can fix this with something like:

```python
mask = ds[0].dp_mean.array > 1e4
em.visualization.show_2d(
    mask,
    vmax=1.0,
)
for d in ds:
    d.median_filter_masked_pixels(mask)
```

Attached is a notebook with a working example; just replace the file names and path to run it: maped_test.ipynb
I tried this code with my Si membrane MAPED data; notebook attached here: [notebook](https://github.com/user-attachments/files/25058505/HB_pr169_maped_test_nb.ipynb). This dataset has a lot of misalignment between the diffraction patterns within a single tilt, which this code does not deal with. I think we should add this functionality to the MAPED code base or to the 4DSTEM dataset class (if it is not there already).
Even with these misaligned DPs, the real-space alignment worked well; I just needed around 15 alignment iterations.
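For concreteness, here is a rough sketch of the kind of per-pattern alignment I mean, using plain numpy/scipy/scikit-image rather than the existing API (the function name and the (Rx, Ry, Qx, Qy) array layout are assumptions, not part of this PR):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_patterns_to_mean(data_4d, upsample_factor=1):
    """Register every diffraction pattern against the mean pattern.

    data_4d : np.ndarray of shape (Rx, Ry, Qx, Qy) -- assumed layout.
    Returns a copy with each pattern shifted onto the mean-pattern origin.
    """
    dp_mean = data_4d.mean(axis=(0, 1))
    aligned = np.empty_like(data_4d, dtype=float)
    for i in range(data_4d.shape[0]):
        for j in range(data_4d.shape[1]):
            # shift that maps this pattern onto the mean pattern
            shift_rc, _, _ = phase_cross_correlation(
                dp_mean, data_4d[i, j], upsample_factor=upsample_factor,
            )
            aligned[i, j] = nd_shift(data_4d[i, j], shift_rc, order=1)
    return aligned
```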
Thank you both for the testing! @henrygbell I should have clarified: MAPED stores the global diffraction shift, it doesn't apply shifts to the list of datasets.
So I think it's working as expected (though it's not handling the de-scan yet).
henrygbell left a comment:
Testing this code on my Si MAPED dataset, it works very well and required zero tuning.
I do think some minor changes can be made to improve the code; see my comments below.
I think the next steps I can take are porting it to torch to speed up the merge_datasets method for large datasets, and correcting for the de-scan.
```
Stores
------
self.diffraction_origins : np.ndarray
    Array of shape (n, 2) with integer (row, col) origins.
```
Suggested change:
```diff
-    Array of shape (n, 2) with integer (row, col) origins.
+    Array of shape (n, 2) with integer (row, col) origins; n = len(datasets).
```
```
Stores
------
self.scales : np.ndarray
    Per-dataset scaling factors (n,).
```
Suggested change:
```diff
-    Per-dataset scaling factors (n,).
+    Per-dataset scaling factors (n,); n = len(datasets)
```
```python
shift_rc, G_shift = weighted_cross_correlation_shift(
    im_ref=G_ref,
    im=G,
    weight_real=im_weight * 0.0 + 1.0,
```
weight_real is set to all ones here (im_weight * 0.0 + 1.0), so the values of im_weight are never actually used. Is that on purpose?
```python
else:
    dp_arr = np.asarray(dp.array if hasattr(dp, "array") else dp)
...
arr = np.asarray(d.array)
```
This is a double conversion; I think one of them is not needed.
Suggested change:
```diff
- arr = np.asarray(d.array)
+ arr = np.asarray(d)
```
```python
H, W = np.asarray(self.dp_mean[0]).shape
...
w = tukey(H, alpha=2.0 * float(edge_blend) / float(H))[:, None] * tukey(
```
I think we should clamp the Tukey alphas as it is done in other parts of the code.
```python
raise RuntimeError("Run diffraction_origin() first so self.diffraction_origins exists.")
...
H, W = np.asarray(self.dp_mean[0]).shape
```
Suggested change:
```diff
  H, W = np.asarray(self.dp_mean[0]).shape
+ alpha = min(1.0, 2.0 * float(edge_blend) / float(H))
```
Suggested change:
```diff
- w = tukey(H, alpha=2.0 * float(edge_blend) / float(H))[:, None] * tukey(
+ w = tukey(H, alpha=alpha)[:, None] * tukey(
```
Suggested change:
```diff
-     W, alpha=2.0 * float(edge_blend) / float(W)
+     W, alpha=alpha,
```
```python
return dataset_merged
...
def shift_images(
```
I think the naming of this function is a bit misleading, because it shifts a stack of images as well as blending them. It's obvious if you glance at the docstring, but I would still change the name to "shift_blend_images" or something like that so there's no confusion. Another way to improve this, and make it more general for use elsewhere, would be to add flags for return_stack and/or return_blend; see the sketch below.
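Something along these lines is what I'm picturing (just a sketch; the shifts and weights arguments are placeholder names, not the function's actual parameters):

```python
import numpy as np

def shift_blend_images(images, shifts, weights=None, return_stack=False):
    """Shift a stack of images by per-image (row, col) offsets, then
    optionally blend them into a single image.

    images : (n, H, W) array; shifts : (n, 2) array of (row, col) offsets.
    Returns the blended image, or the shifted stack if return_stack=True.
    """
    images = np.asarray(images, dtype=float)
    shifts = np.asarray(shifts)
    shifted = np.empty_like(images)
    for k, (dr, dc) in enumerate(shifts):
        # integer roll as a stand-in for whatever (sub-pixel) shift the
        # real implementation applies
        shifted[k] = np.roll(images[k], (int(round(dr)), int(round(dc))), axis=(0, 1))
    if return_stack:
        return shifted
    w = np.ones(len(images)) if weights is None else np.asarray(weights, dtype=float)
    # weighted average of the shifted stack
    return np.tensordot(w / w.sum(), shifted, axes=1)
```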



What does this PR do?
This pull request adds support for multi-angle precession electron diffraction (MAPED) data processing, described in this paper: https://arxiv.org/abs/2506.11327
Typical usage is something like the example workflow sketched below.
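This is a minimal pseudocode sketch only: the MAPED class name, its constructor, and the loading step are placeholders for illustration; diffraction_origin(), merge_datasets(), dp_mean, and median_filter_masked_pixels() are taken from the code and comments above, but their exact signatures may differ from what is shown here.

```python
# ds: a list of 4D-STEM datasets, one per precession tilt (loading omitted).

# Optional: mask dead/hot pixels in experimental data before alignment.
mask = ds[0].dp_mean.array > 1e4
for d in ds:
    d.median_filter_masked_pixels(mask)

# Hypothetical container class for the per-tilt datasets.
maped = MAPED(ds)

# Find the diffraction-space origin of each dataset.
maped.diffraction_origin()

# Align the tilts in real space and merge them into a single dataset.
dataset_merged = maped.merge_datasets()
```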
Notebook example (maped01.ipynb, attached) applied to this data from @smribet: https://drive.google.com/drive/folders/1EtNWeWZSO8TZ7Qibak2SVxQiAC9IsCCb?usp=sharing
Instructions for reviewers
Please check this code for accuracy and test it on other MAPED datasets. The diffraction-space origin finding will likely need some work: it uses a "real-space-biased cross-correlation", which is the best solution I've come up with so far (a pytest is added for that function).
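For reviewers unfamiliar with the term, here is a minimal sketch of one way a real-space bias can enter a cross-correlation: multiply the correlation surface by a real-space weight before picking the peak. This is an illustration only, not necessarily how weighted_cross_correlation_shift is implemented.

```python
import numpy as np

def real_space_biased_cc_shift(im_ref, im, weight_real=None):
    """Illustrative sketch: FFT-based cross-correlation whose correlation
    surface is multiplied by a real-space weight before the peak is picked,
    biasing the estimate toward shifts favored by that weight.
    Returns the integer (row, col) displacement of im relative to im_ref."""
    cc = np.real(np.fft.ifft2(np.fft.fft2(im) * np.conj(np.fft.fft2(im_ref))))
    if weight_real is not None:
        # weight_real is defined with zero shift at the array center;
        # ifftshift moves it to match the correlation's layout
        cc = cc * np.fft.ifftshift(weight_real)
    r, c = np.unravel_index(np.argmax(cc), cc.shape)
    # wrap the peak position into signed shifts
    H, W = cc.shape
    dr = r - H if r > H // 2 else r
    dc = c - W if c > W // 2 else c
    return np.array([dr, dc], dtype=float)
```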
TODO