| Field | Value |
|---|---|
| Summary | devel/cmake-core: segfault on checking MPI implementation when GCC is used as the compiler |
| Product | Ports & Packages |
| Component | Individual Port(s) |
| Status | Closed Overcome By Events |
| Severity | Affects Only Me |
| Priority | --- |
| Version | Latest |
| Hardware | Any |
| OS | Any |
| Reporter | Vedran Miletic \<vedran\> |
| Assignee | freebsd-kde (group) \<kde\> |
| CC | adridg, thierry |
| Flags | bugzilla: maintainer-feedback? (kde) |
Description

Vedran Miletic
2023-11-18 11:36:17 UTC

This still occurs with CMake 3.27.7 installed via pkg.

Vedran, you might want to remove the core dump -- it contains some paths and SSH details that you might not want to share. IPv4 addresses, nothing too personal, though.

From the name of the executable, the size of the core, and strings in the core like `/home/vedranm/gaseri/dev/gromacs/build/CMakeFiles/CMakeScratch/TryCompile-VecQz7/cmTC_ed0ef`, I'm going to conclude that this is not actually **cmake** segfaulting, but one of the programs it builds at CMake time. This is, to some extent, expected: CMake tries all kinds of things during configure, and some of those tests crash. Does the CMake step complete successfully? That's the important thing, not whether there's a core file from along the way.

I can't reproduce this locally either: in a 13.2 poudriere environment where I built OpenMPI 4 and fftw3 from ports (with clang, I assume) and used gcc13 (different from your experiment!), gromacs can be configured. I needed an extra flag for CMake because fftw was not found -- possibly a mismatch because of compilers. Forcing it to use the slow bundled fftw got it through the CMake step and allowed building gromacs.

I think we need a much more detailed reproduction scenario (one that starts with "Install 14-RELEASE, then pkg install gcc12 cmake, ...") to investigate further, but like I said: I don't think this is actually a problem.

I don't think I have seen this happen with OpenMPI 5, so I wouldn't consider it an issue anymore. Regarding the core dump, I agree, but I don't have privileges to remove it or mark it private. If you can do that, please do.