I've remembered an actual example from my work.
I was once involved in specifying a computer system that was procured through a competitive tender process; I was the technical architect, or some such title.
In order to be accepted and paid for, it had to run a number of acceptance tests using reasonably well-known synthetic benchmarks, including but not limited to the Linpack/HPL benchmark used to rank computers on the "Top 500" list.
The final score was determined by an algorithm built around a lot of "e to the power x" terms (antilogarithms to base e, i.e. exponentials), and if I recall correctly the winning bidder (us) had to commit to a score in the bid and then match or exceed that score in reality.
I understood that the scoring logic gave significant weight to over-performance, and imposed significant penalties for under-performance, compared with simpler "straight line" (linear) scoring.
I was also confident that we were going to over-perform on one of the benchmark acceptance tests by a significant margin, and that the "e to the power x" scoring for that result would award us so many points that we could hardly fail to match or exceed the required aggregate score across all tests.
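I don't remember the exact formula, but here's a toy sketch of why exponential scoring behaves that way. The scaling constant `k` and the shape of the formula are my invention for illustration; the real tender algorithm was certainly different in detail:

```python
import math

def linear_score(actual, committed):
    # "straight line" scoring: points simply proportional
    # to the ratio of achieved performance to committed performance
    return actual / committed

def exp_score(actual, committed, k=3.0):
    # hypothetical exponential scoring: e^(k * (ratio - 1)).
    # Over-performance compounds rapidly; under-performance
    # is penalised just as sharply. k is an invented constant.
    return math.exp(k * (actual / committed - 1.0))

# Three hypothetical benchmarks: one big over-performer (150%
# of the committed figure), two slight under-performers.
results = [(1.5, 1.0), (0.95, 1.0), (0.9, 1.0)]

lin_total = sum(linear_score(a, c) for a, c in results)
exp_total = sum(exp_score(a, c) for a, c in results)

# Under linear scoring the aggregate barely moves; under
# exponential scoring the single over-performer dominates the
# total and more than covers the two shortfalls.
```

With these made-up numbers, the over-performing benchmark alone contributes more exponential points than both under-performers combined, which is the effect I was relying on.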
I'm not sure that many of the people I worked with understood this, but I was able to say something like "it'll be OK, trust me", and it was: we exceeded the score requirement by a wide margin, and the system was accepted and paid for.
Now that's not log tables, of course, but an understanding of how logarithms and exponentials work was fundamental to my understanding, and to my ability to reassure my colleagues at the time.