Bills Introduced – 12-2-25
Yesterday, with both the House and Senate in session, there were 52 bills introduced. One of those bills will receive additional coverage in this blog:
S 3315 A bill to require the Secretary of Health and Human Services and the Director of the Cybersecurity and Infrastructure Security Agency to coordinate to improve cybersecurity in the health care and public health sectors, and for other purposes. Cassidy, Bill [Sen.-R-LA]
Healthcare Cybersecurity
It looks like S 3315 will be similar to S 5390, the Health Care Cybersecurity and Resiliency Act of 2024, which Cassidy introduced in November 2024. No action was taken on that bill in the 118th Congress.
MIP Legislation
I would like to mention in passing two bills that will not receive additional coverage in this blog:
HR 6356 To establish protections for individual rights with respect to computational algorithms, and for other purposes. Clarke, Yvette D. [Rep.-D-NY-9]
S 3308 A bill to establish protections for individual rights with respect to computational algorithms, and for other purposes. Markey, Edward J. [Sen.-D-MA]
It is interesting to see these two bills using the term ‘computational algorithms’ when I suspect that they mean ‘artificial intelligence’ or ‘large language models’. At its most basic level, a computational algorithm is nothing more than a predetermined sequence of mathematical operations applied to a set of numbers. I would be hard-pressed to come up with a set of ‘individual rights’ that would apply to such a system.
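To make that basic sense of the term concrete, here is a toy sketch of a ‘computational algorithm’ as described above: a fixed sequence of arithmetic steps applied to numeric inputs. The function name, inputs, and weights are all invented for illustration; they are not drawn from either bill.

```python
# A toy "computational algorithm" in the most basic sense: a
# predetermined sequence of mathematical operations applied to a
# set of numbers. Nothing here knows or cares that the numbers
# might describe a person.

def risk_score(income: float, age: float, debt: float) -> float:
    """Apply a fixed sequence of equations to numeric inputs."""
    normalized_debt = debt / max(income, 1.0)        # step 1: a ratio
    age_factor = age / 100.0                         # step 2: a scaling
    return 0.7 * normalized_debt + 0.3 * age_factor  # step 3: weighted sum

print(risk_score(50_000.0, 40.0, 10_000.0))  # prints 0.26
```

The point of the illustration is that the arithmetic itself is sterile; any rights question arises only when the inputs happen to describe real people.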
I suppose that what the staffers are trying to do here is to point out that AI and LLM models are sophisticated algorithms operating on data sets that include information about individuals, making sterile computations on that data that affect individuals. And I too have reservations and concerns about what data is being accessed and how it is being used by these systems. But calling these systems ‘computational algorithms’, even if clearly defined within the legislation, just adds another layer of confusion about the problem.
The problem is not so much with the AI or LLM models; it is really about the huge amounts of data available on each and every individual in this country (really around the world, but that is way beyond the scope of federal legislation). The rules governing the collection, storage, sharing and manipulation of that information are weak at best, and for most practical purposes non-existent. That is where the focus should be when trying to protect individual rights, not on the latest set of tools used to play with that data; those tools become obsolete and are replaced with newer, more complex tools too quickly for legislation and regulations to keep up.