Conversation

Thank you so much!! I downloaded your code and it fixed my Langfuse issue!

Tried it on my end as well and it is working.

Guys, is this such a critical fix that it has been waiting for 3 months without being merged?

Hi @nurtext.

Hi @camucamulemon7, I wasn't previously aware of this issue and would need to investigate it. Unfortunately I'm neither the author of Langfuse nor of the filter pipeline itself; this was just meant as a quick bugfix for my own infrastructure. Maybe the original contributors could have a look at it? Cheers.

Update: There seems to be a new PR available with another fix for the pipeline; maybe it solves your issue:
jkassie
left a comment
Verified that this (in conjunction with pr-586) fixed the issues I was seeing with Langfuse v3.
jkassie
left a comment
Actually, after looking at pr-856 again, applying it in conjunction with this change causes issues. I'd suggest placing the call to trace.end() immediately before the # Flush data to Langfuse comment (lines 398/400). As it stands, if creating the LLM generation fails you end up with an open trace. And if you apply both pr-557 and pr-586 you end up with multiple trace.end() calls, which causes issues.
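The pattern described above can be sketched roughly as follows. This is a minimal illustration, not the pipeline's actual code: the Trace class and the record_generation helper are stand-ins invented here (not the real Langfuse SDK), and the point is only the ordering suggested in the comment: close the trace exactly once, even when generation creation fails, and do it immediately before the flush step.

```python
class Trace:
    """Stand-in for a Langfuse trace; counts end() calls for illustration."""

    def __init__(self):
        self.end_calls = 0

    def end(self):
        self.end_calls += 1


def record_generation(trace, fail=False):
    """Stand-in for creating the LLM generation; may fail."""
    if fail:
        raise RuntimeError("could not create LLM generation")


def finish_pipeline(trace, fail_generation=False):
    try:
        record_generation(trace, fail=fail_generation)
    except RuntimeError:
        pass  # generation failed, but the trace must still be closed
    finally:
        # Guard so that combining patches never produces a second end() call
        if trace.end_calls == 0:
            trace.end()
    # Flush data to Langfuse would happen here, after the single end()
    return trace.end_calls


print(finish_pipeline(Trace()))                        # 1
print(finish_pipeline(Trace(), fail_generation=True))  # 1
```

Either way the trace is ended exactly once before flushing, which avoids both the open-trace case and the duplicate trace.end() case mentioned above.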
Fixed an infinite-trace situation visible in the Langfuse dashboard, which led to missing trace metadata, an incorrect token-usage/cost preview, and wrong latency measurements.