See build failure: http://beefy18.nyi.freebsd.org/data/main-amd64-default/p11c06749049e_s1c64959bff/logs/py38-tensorflow-1.15.5_2.log

ld: error: undefined symbol: absl::lts_20210324::Mutex::~Mutex()

There are similar issues elsewhere, and the issue seems to be with grpc:

https://bugs.gentoo.org/817212
https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=7a8b5d3587585121d92565980520507e9eb6e37d

Confirmed that downgrading grpc to 1.38 locally avoids the issue.
Created attachment 229235 [details]
Upgrade Tensorflow to 2.1.0

Could you try this patch? It upgrades tensorflow to version 2.1.0. I don't currently have access to a FreeBSD machine, as I'm busy with university work. It seems the only suitable way to fix 1.15.5 is to fix the linking against the latest grpc, since creating a port of an even older grpc version would just cause more pollution. Tensorflow 1.15 already relies on a lot of outdated ports to build, so it might be worth putting the effort into upgrading to the latest version. I'm interested to see whether the 2.1.0 version I currently use works fine.
(In reply to Anthony Donnelly from comment #1) Unfortunately, even after fixing the newlines in this, it doesn't build: /usr/bin/ar rcsD bazel-out/host/bin/tensorflow/core/lib/strings/libbase64.pic.a bazel-out/host/bin/tensorflow/core/lib/strings/_objs/base64/base64.pic.o) ERROR: /wrkdirs/usr/ports/science/py-tensorflow/work-py38/tensorflow-2.1.0/tensorflow/core/platform/BUILD:53:1: C++ compilation of rule '//tensorflow/core/platform:human_readable_json_impl' failed (Exit 1) tensorflow/core/platform/default/human_readable_json.cc:36:29: error: no member named 'error_message' in 'google::protobuf::util::status_internal::Status' auto error_msg = status.error_message(); ~~~~~~ ^ tensorflow/core/platform/default/human_readable_json.cc:54:29: error: no member named 'error_message' in 'google::protobuf::util::status_internal::Status' auto error_msg = status.error_message(); ~~~~~~ ^ 2 errors generated. Perhaps there's something locally causing this, not sure. Since sunpoet@ committed the latest grpc and tensorflow updates, maybe he will care to take a look?
Created attachment 232155 [details]
patch for protobuf 3.19.4
(In reply to Steve Wills from comment #2)
As I mentioned in the private mail, changing "error_message()" to "message()" would fix this. And you'll need attachment 232155 [details] for the upcoming protobuf 3.19.4.

From the build log:

...
SUBCOMMAND: # //tensorflow/core/platform:human_readable_json_impl [action 'Compiling tensorflow/core/platform/default/human_readable_json.cc']
(cd /wrkdirs/usr/ports/science/py-tensorflow/work-py38/bazel_out/90c479ef01ddf313fc3134a499f1a18f/execroot/org_tensorflow && \
  exec env - \
    PATH=/bin:/usr/bin/:/usr/local/bin \
    PWD=/proc/self/cwd \
    PYTHON_BIN_PATH=/usr/local/bin/python3.8 \
    PYTHON_LIB_PATH=/usr/local/lib/python3.8/site-packages \
    TF2_BEHAVIOR=1 \
    TF_CONFIGURE_IOS=0 \
    TF_SYSTEM_LIBS=absl_py,astor_archive,boringssl,com_github_googleapis_googleapis,com_github_googlecloudplatform_google_cloud_cpp,com_google_protobuf,curl,cython,double_conversion,enum34_archive,flatbuffers,functools32_archive,gast_archive,gif,grpc,hwloc,icu,jsoncpp_git,keras_applications_archive,libjpeg_turbo,lmdb,nasm,nsync,opt_einsum_archive,org_sqlite,pasta,pcre,png,pybind11,six_archive,snappy,swig,termcolor_archive,wrapt,zlib_archive \
  /usr/bin/clang -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -Wall -fno-omit-frame-pointer -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections '-std=c++0x' -MD -MF bazel-out/freebsd-opt/bin/tensorflow/core/platform/_objs/human_readable_json_impl/human_readable_json.pic.d '-frandom-seed=bazel-out/freebsd-opt/bin/tensorflow/core/platform/_objs/human_readable_json_impl/human_readable_json.pic.o' -fPIC -D__CLANG_SUPPORT_DYN_ANNOTATION__ -iquote . -iquote bazel-out/freebsd-opt/bin -iquote external/com_google_protobuf -iquote bazel-out/freebsd-opt/bin/external/com_google_protobuf -iquote external/com_google_absl -iquote bazel-out/freebsd-opt/bin/external/com_google_absl -iquote external/nsync -iquote bazel-out/freebsd-opt/bin/external/nsync -iquote external/double_conversion -iquote bazel-out/freebsd-opt/bin/external/double_conversion -I/usr/local/include '-std=c++14' -no-canonical-prefixes -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c tensorflow/core/platform/default/human_readable_json.cc -o bazel-out/freebsd-opt/bin/tensorflow/core/platform/_objs/human_readable_json_impl/human_readable_json.pic.o)
SUBCOMMAND: # //tensorflow/core/platform:tracing_impl [action 'Compiling tensorflow/core/platform/default/tracing.cc']
...
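For reference, the substitution described above would amount to a small patch along these lines (a sketch only, with hunk context omitted; the two affected call sites at lines 36 and 54 are taken from the compile errors earlier in this bug, and note that with newer protobuf, Status::message() returns a StringPiece rather than a const string&, so callers that require a std::string may need an explicit conversion):

```diff
--- tensorflow/core/platform/default/human_readable_json.cc.orig
+++ tensorflow/core/platform/default/human_readable_json.cc
@@ (around line 36)
-    auto error_msg = status.error_message();
+    auto error_msg = status.message();
@@ (around line 54)
-    auto error_msg = status.error_message();
+    auto error_msg = status.message();
```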
Created attachment 234250 [details]
Change error_message to message in several places

I've just found this ticket and followed along with building TensorFlow 2.1. I've attached a patch that changes some error_message invocations into message, as recommended. It still won't build for me; there are six errors like this:

ERROR: /root/py-tensorflow/work-py38/tensorflow-2.1.0/tensorflow/python/BUILD:354:1: C++ compilation of rule '//tensorflow/python:bfloat16_lib' failed (Exit 1)
tensorflow/python/lib/core/bfloat16.cc:633:8: error: no matching function for call to object of type '(lambda at tensorflow/python/lib/core/bfloat16.cc:607:25)'
  if (!register_ufunc("equal", CompareUFunc<Bfloat16EqFunctor>,
       ^~~~~~~~~~~~~~
tensorflow/python/lib/core/bfloat16.cc:607:25: note: candidate function not viable: no overload of 'CompareUFunc' matching 'PyUFuncGenericFunction' (aka 'void (*)(char **, const long *, const long *, void *)') for 2nd argument
  auto register_ufunc = [&](const char* name, PyUFuncGenericFunction fn,
                        ^
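This looks like the known incompatibility between TF 2.1's bfloat16.cc and NumPy >= 1.19, where PyUFuncGenericFunction gained const on its npy_intp pointer parameters (visible in the "aka 'void (*)(char **, const long *, const long *, void *)'" in the error above). Upstream TensorFlow fixed this by adding const to the matching ufunc helpers; against 2.1.0 the change would look roughly like the following (a sketch of the upstream fix, not a tested patch against this tree — the exact declaration formatting is assumed, and the other failing sites likely need the same treatment):

```diff
--- tensorflow/python/lib/core/bfloat16.cc.orig
+++ tensorflow/python/lib/core/bfloat16.cc
@@ (CompareUFunc and the other ufunc helpers)
-template <typename Functor>
-void CompareUFunc(char** args, npy_intp* dimensions, npy_intp* steps,
-                  void* data) {
+template <typename Functor>
+void CompareUFunc(char** args, npy_intp const* dimensions,
+                  npy_intp const* steps, void* data) {
```

Alternatively, pinning NumPy below 1.19 at build time should sidestep the signature change entirely, at the cost of building against an older NumPy.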
(In reply to Jared Jennings from comment #5)
I would appreciate guidance on which patches need to be applied, as I'm unable to get past the "toolchain" error from Bazel when I start compilation. I spent 4-5 hours yesterday going through the bug listing without success, so I haven't been able to identify the cause or a resolution. Any suggestion or guidance will be greatly appreciated.
This bug no longer applies. I have a tensorflow 2.9.1 port that is ready to submit; it's just running final tests in poudriere. So I think it's safe to close this bug.
Closing as per request of the science/tensorflow maintainer.