I really appreciate the depth of the KT/KH framework—it highlights the interdependence of knowing that and knowing how in a way that traditional epistemology sometimes overlooks. IFEM shares a lot of these concerns but approaches the question from a different angle, particularly when it comes to knowledge refinement.
KT/KH emphasizes how knowledge enables control, while IFEM is more focused on how knowledge stabilizes over time. If KH and KT are interdependent, should we expect certain epistemic structures to persist rather than shift indefinitely? IFEM suggests that by tracking how knowledge reduces uncertainty (or entropy), we can measure when we’re seeing true epistemic refinement versus when we’re just cycling through conceptual frameworks.
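To make the entropy-tracking idea concrete, here's a minimal sketch, assuming "entropy" is read as Shannon entropy over a credence distribution across rival hypotheses (only one possible operationalization, with made-up numbers):

```python
import math

def shannon_entropy(dist):
    """Shannon entropy (in bits) of a probability distribution over hypotheses."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def bayesian_update(prior, likelihood):
    """Posterior over hypotheses given per-hypothesis likelihoods of the evidence."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# Three rival hypotheses, initially near-equal credence.
prior = {"H1": 0.34, "H2": 0.33, "H3": 0.33}
# Hypothetical evidence that strongly favors H1.
likelihood = {"H1": 0.9, "H2": 0.3, "H3": 0.1}

posterior = bayesian_update(prior, likelihood)
print(f"entropy before: {shannon_entropy(prior):.3f} bits")      # ~1.585
print(f"entropy after:  {shannon_entropy(posterior):.3f} bits")  # ~1.127
```

On this toy reading, entropy that falls and stays down across updates would count as genuine refinement; entropy that merely oscillates would look like cycling through frameworks.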
I’m also curious how you’d see this playing out in AI. If AI can refine its knowledge without subjective KH, does that mean there’s an alternative form of procedural knowledge at play? Or do you think KH is fundamentally tied to human cognition in a way AI can’t replicate?
Would love to hear your thoughts on whether IFEM’s entropy-reduction framework could complement KT/KH’s model of knowledge as control.
Thank you for the feedback, and thank you for bringing IFEM to my attention. I'm still mulling IFEM over, but I will try to answer your questions.
To the first question, I think it's hard to be very confident about the stability of epistemic structures, given past revolutions and revisions. But OTOH maybe you could argue that once a civilization reaches the point where science leads tech rather than following it (i.e., tech advances by developing new applications based on scientific theories, rather than science advancing by making sense of the world that tech had already revealed), then increasing KH/control could be taken as an indicator that we're on the way to one or more IFs.
I have not followed developments in AI closely, but it still seems to me to be just a tool. It is the users of AI who have KH and KT, not AI itself.
Could these models be complementary? I am not convinced that IFEM overcomes current objections to scientific realism, so I would still be inclined to view IFs instrumentally (cf. the ideal gas law? Or maybe as central facts in something like Quine's web of belief). But my instrumentalism is not a hard antirealism; I am construing KH as making genuine contact with a mind-independent reality. So maybe you could use advances in tech/KH/control to buttress claims about scientific progress toward IFs (see above).
Really appreciate this response—it raises some of the biggest challenges for IFEM, especially regarding scientific realism and the stability of epistemic structures. Your point about the shift from technology-driven science to science-driven technology is really interesting because it suggests that KH (practical control) may serve as an indirect metric for how close KT (theoretical understanding) is getting to something fundamental—what IFEM would call an Ideal Fact. If advances in KH keep reducing epistemic uncertainty in a way that makes predictions more precise and technology more reliable, then that would suggest we are asymptotically refining toward stable epistemic attractors, even as refinements continue at finer levels of resolution.
AI already functions as a tool of epistemic refinement by systematically reducing entropy in knowledge systems. Rather than merely extending human cognition, it accelerates the process: filtering vast datasets, optimizing models, and identifying stable structures that would have taken human inquiry far longer to reach. By improving predictions, eliminating inconsistencies, and converging on more reliable patterns, AI exemplifies IFEM's claim that epistemic progress is measurable and directional rather than arbitrary. Granted, it is currently a tool rather than an autonomous agent, but its capacity to accelerate entropy reduction suggests it may track epistemic attractors that human cognition alone would not.
I also see why IFEM might initially resemble instrumentalism, but the key distinction is that IFEM tracks whether models refine in a way that stabilizes toward epistemic attractors rather than remaining purely instrumental. The ideal gas law comparison is a great one: it's a highly useful model, but do we think of it as revealing an Ideal Fact, or is it just an effective simplification? IFEM tries to answer this by tracking long-term entropy reduction: if a model like the ideal gas law keeps being refined without being overturned, then it's moving toward an Ideal Fact, not just an instrumentally useful framework. In contrast, if a theory eventually collapses (like phlogiston, or Newtonian mechanics outside its domain of validity), then it was just a local attractor rather than an IF.
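To make "refined without being overturned" concrete: van der Waals corrected the ideal gas law rather than replacing it, and the old law survives as a limiting case. A rough numerical sketch (CO2 constants are standard textbook values; the scenario is just illustrative):

```python
# The ideal gas law PV = nRT was refined by van der Waals:
# P = nRT/(V - nb) - a(n/V)^2, where a, b correct for molecular
# attraction and finite molecular volume. As a, b -> 0 the old law returns.
R = 0.08314  # gas constant in L·bar/(mol·K)

def p_ideal(n, V, T):
    return n * R * T / V

def p_vdw(n, V, T, a, b):
    return n * R * T / (V - n * b) - a * (n / V) ** 2

# One mole of CO2 near standard conditions.
# CO2 constants: a = 3.640 L^2·bar/mol^2, b = 0.04267 L/mol
n, V, T = 1.0, 22.4, 273.15
print(f"ideal gas:     {p_ideal(n, V, T):.4f} bar")                 # ~1.0137
print(f"van der Waals: {p_vdw(n, V, T, 3.640, 0.04267):.4f} bar")   # ~1.0086
```

In ordinary regimes the correction is a fraction of a percent: the refinement tightens the old law's predictions rather than discarding them, which is the pattern IFEM reads as movement toward an IF.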
On AI—if AI begins developing its own KH in a way that doesn’t map directly onto human epistemology, would that suggest there are alternative epistemic attractors that human cognition alone doesn’t refine toward? Or does knowledge refinement require subjective interpretation in a way AI will never have?
And on realism—do you think IFEM could strengthen scientific realism by providing a structured way to distinguish between ‘temporary instruments’ and knowledge that is stabilizing? Or does the historical record suggest that even the most stable theories eventually get revised?
On AI -- I do not believe that AI literally knows anything (KH or KT), because I believe that literal KT and KH belong only to conscious agents, and AI presently has neither consciousness nor agency. My oven mitts can withstand temperatures that I cannot, but the mitts don't take the cake out of the oven, I do.
On realism -- I still think there will always be more than one way to skin a cat with KT and scientific theories. Some may be more parsimonious than others, and therefore more convenient for human purposes, but we don't know whether parsimony tracks reality. I think the historical record supports what Rumsfeld said about known unknowns and unknown unknowns.
I'm not sure I fully understand how entropy fits into IFEM. I may read your essay again and post a question there, but I can't promise anything because my inbox and unfinished drafts folder are both overflowing. So, in case I don't get to it, I'll just mention 'The Entropy Law and the Economic Process' by Nicholas Georgescu-Roegen as a book that might be relevant to your project.