I have to do a lot of slides and emails. So I can dump in massive email threads, tell it the outcome, and have it address every concern in my writing style. Then I read it over, delete parts, and hit send. Same with prepping presentations: I can say "recap the quarter's highlights for xyz," pick a few, and move on. I still have to present them and read things, but that's becoming the bulk of the job.
I’ve definitely cut down a lot of my workload with them.
I have data spread across multiple external systems that I need to manage, so I built some tools to automate it.
Since the LLM agent can make follow-up queries against the APIs on its own, it excels at cross-referencing between systems. You can't know the exact queries ahead of time, so you can't automate this the traditional way.
So what used to waste several hours a week of pulling data, searching with crap search tools, and pulling data again across a dozen browser tabs is now just an LLM doing a couple dozen tool calls.
That leaves me to just take the output and act on it.
I've done tests and it's accurate and reproducible enough. Plus I already have a good sense of whether such info exists, so if it gets something wrong I can trivially redirect it.
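To make the setup concrete, here's a minimal sketch of the kind of tool-calling loop I mean. Everything specific in it is hypothetical: the `lookup_crm_record` and `search_tickets` tools, the endpoints they hit, and the field names are stand-ins for whatever external systems you actually manage. It uses the Anthropic Messages API tool-use pattern, but any model with function calling follows the same shape.

```python
# Sketch of an agent cross-referencing two external systems.
# The two "systems" are hypothetical REST APIs; swap in your own.
import json
import requests
import anthropic

TOOLS = [
    {
        "name": "lookup_crm_record",  # hypothetical system A
        "description": "Fetch a customer record from the CRM by customer id.",
        "input_schema": {
            "type": "object",
            "properties": {"customer_id": {"type": "string"}},
            "required": ["customer_id"],
        },
    },
    {
        "name": "search_tickets",  # hypothetical system B
        "description": "Search the ticketing system for tickets matching a query.",
        "input_schema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
]

def run_tool(name: str, args: dict) -> str:
    """Dispatch a tool call to the (hypothetical) external API and return raw JSON text."""
    if name == "lookup_crm_record":
        r = requests.get(f"https://crm.example.com/api/customers/{args['customer_id']}")
    elif name == "search_tickets":
        r = requests.get("https://tickets.example.com/api/search", params={"q": args["query"]})
    else:
        return json.dumps({"error": f"unknown tool {name}"})
    return r.text

def cross_reference(task: str) -> str:
    """Let the model issue follow-up tool calls until it has an answer."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    messages = [{"role": "user", "content": task}]
    while True:
        resp = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder model id; use whatever you have
            max_tokens=2048,
            tools=TOOLS,
            messages=messages,
        )
        if resp.stop_reason != "tool_use":
            # Model is done cross-referencing; return its final text.
            return "".join(b.text for b in resp.content if b.type == "text")
        # Execute every tool call it asked for and feed the results back.
        messages.append({"role": "assistant", "content": resp.content})
        results = [
            {
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": run_tool(block.name, block.input),
            }
            for block in resp.content
            if block.type == "tool_use"
        ]
        messages.append({"role": "user", "content": results})

print(cross_reference("Which open tickets belong to customers flagged as churn risks?"))
```

The loop is the whole trick: the model decides what to pull next based on what the last call returned, which is exactly the part you can't script in advance.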
I wrote Model Context Protocol (MCP) servers.
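For anyone curious what that looks like, here's a minimal sketch using the official MCP Python SDK (the `mcp` package and its FastMCP helper). The `inventory.example.com` endpoints and the tools themselves are made up; each real server just wraps one external system's API so the agent can call it.

```python
# Minimal MCP server sketch using the official Python SDK (pip install "mcp[cli]").
# The external API it wraps is hypothetical; each server just exposes one
# system's API as tools the LLM agent can call.
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory")  # server name shown to the MCP client

@mcp.tool()
def get_item(item_id: str) -> str:
    """Fetch a single inventory record by id from the (hypothetical) external system."""
    r = requests.get(f"https://inventory.example.com/api/items/{item_id}")
    r.raise_for_status()
    return r.text

@mcp.tool()
def search_items(query: str) -> str:
    """Full-text search over inventory records; returns raw JSON for the model to read."""
    r = requests.get("https://inventory.example.com/api/items", params={"q": query})
    r.raise_for_status()
    return r.text

if __name__ == "__main__":
    # Runs over stdio by default, so any MCP-capable client can launch it.
    mcp.run()
```

Each server stays tiny; the cross-referencing smarts live entirely in the model's tool calls, not in the server code.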