How can I find out which features make users active, using the data stored in Snowflake?: Biz x Dev Roundtable
The Biz x Dev Roundtable series will provide solutions to data-related problems and trending topics in the form of a conversation between software engineers and business professionals.
Today's Attendees.
Hayato Onodera - Sales Manager @Morph
Manager of Corporate Sales at Morph.
Today he has a concern about using data for marketing content.
Naoto Shibata - CEO/Backend Engineer @Morph
He is our CEO, but also the lead backend engineer at Morph.
“I need to verify the reactions we get at business meetings with data as well.”
Onodera:
Today, I have an issue regarding the analysis of Morph's product data.
We are in the very early phase of our product, and we are considering what kind of messaging promotes customer acquisition.
At trade shows and business meetings, we often get good reactions to analysis task planning, code generation, and automated periodic report generation, but I would like to understand which of these functions are actually used most frequently by the beta users already on the product.
I know the application data is being transferred to Snowflake, so I'd like to use it to gain quantitative insight.
Shibata:
Okay. Let's do some simple analysis together in this meeting and see which data we can use.
As long as we work with the data in Snowflake, it won't affect the application.
Let's start by confirming how much data we have to analyze.
Shibata:
This is the Snowflake screen. There are more than 50 tables in total, so I'll put the ones I think we'll need for this analysis onto the Morph canvas.
I'd like to know a little more about what you want to do.
Onodera:
Morph is a product meant mainly for business use. We consider a user who uses Morph more than once a week to be active, which means they run some regular tasks on Morph.
I'd like to see how many of the users who fall into this category are using the task planning, code generation, and reporting functions.
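As a rough illustration of that definition, an active-user check might look like the sketch below. The usage-event data, column names, and weekly grouping here are assumptions for illustration, not Morph's actual schema.

```python
import pandas as pd

# Hypothetical usage-event log; in practice this would be pulled from Snowflake.
events = pd.DataFrame({
    "user_id": ["u1", "u1", "u1", "u2", "u2", "u3"],
    "event_at": pd.to_datetime([
        "2024-05-06", "2024-05-08", "2024-05-14",
        "2024-05-07", "2024-05-21", "2024-05-09",
    ]),
})

# Count events per user per ISO week (a real analysis would also key on year).
weekly = (
    events
    .assign(week=events["event_at"].dt.isocalendar().week)
    .groupby(["user_id", "week"])
    .size()
    .rename("uses")
    .reset_index()
)

# "Active" here means more than one use in a given week, per the definition above.
active_users = weekly.loc[weekly["uses"] > 1, "user_id"].unique()
print(active_users)
```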
Shibata:
Okay, I see. Then let's first see how many records correspond to this. If there aren't many matches, we can conclude that it would be better to track other indicators.
I'll try to tally the number of calls to Morph AI's task planning function by user ID.
This feature was released about a month ago, and several users have already used it over 100 times.
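A tally like this could be run from Python against Snowflake roughly as follows. The connection parameters are placeholders, and the event table and column names are assumptions, not Morph's real schema.

```python
import snowflake.connector

# Placeholder credentials; replace with your own account settings.
conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT",
    user="YOUR_USER",
    password="YOUR_PASSWORD",
    warehouse="ANALYTICS_WH",
    database="APP_DB",
    schema="PUBLIC",
)

# Count task-planning calls per user; the table and column names are assumptions.
query = """
    SELECT user_id, COUNT(*) AS planning_calls
    FROM morph_ai_events
    WHERE event_type = 'task_planning'
    GROUP BY user_id
    ORDER BY planning_calls DESC
"""

cur = conn.cursor()
try:
    cur.execute(query)
    for user_id, planning_calls in cur.fetchall():
        print(user_id, planning_calls)
finally:
    cur.close()
    conn.close()
```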
Onodera:
Oh, this is more than I expected. The users with very high counts may just have been experimenting on a trial basis, but it also looks like they might be using it for real business tasks.
I think we can find out who they are by merging this with the user table so we can conduct interviews.
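A minimal sketch of that merge, assuming the per-user counts and the user table have already been pulled into pandas DataFrames; all names and values here are illustrative.

```python
import pandas as pd

# Hypothetical per-user tally and user table; in practice both come from Snowflake.
planning_counts = pd.DataFrame({
    "user_id": ["u1", "u2", "u3"],
    "planning_calls": [142, 87, 5],
})
users = pd.DataFrame({
    "user_id": ["u1", "u2", "u3"],
    "email": ["a@example.com", "b@example.com", "c@example.com"],
    "company": ["Acme", "Globex", "Initech"],
})

# Join the counts onto user attributes so heavy users can be contacted for interviews.
heavy_users = (
    planning_counts
    .merge(users, on="user_id", how="left")
    .query("planning_calls >= 100")
)
print(heavy_users)
```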
Onodera:
Next, I would like to tally up usage of the new reporting functionality as well.
“Better to use Python instead of SQL for this indicator.”
Shibata:
As for the reporting function, it is better to use Python instead of SQL because the output results are stored as JSON, which has to be parsed after it is retrieved.
It's tedious to write this parsing code by hand, so I'll ask the AI to write it for me.
Even for an engineer, this kind of thing is a hassle.
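The parsing step could look roughly like this in Python; the JSON structure, field names, and sample rows are assumptions for illustration, not Morph's actual report format.

```python
import json
import pandas as pd

# Hypothetical rows pulled from Snowflake: one JSON string per report run.
rows = [
    ("u1", '{"report": {"type": "weekly_summary", "charts": 3}}'),
    ("u2", '{"report": {"type": "kpi_dashboard", "charts": 5}}'),
    ("u1", '{"report": {"type": "weekly_summary", "charts": 2}}'),
]

records = []
for user_id, payload in rows:
    data = json.loads(payload)       # parse the stored JSON string
    report = data.get("report", {})  # guard against missing keys
    records.append({
        "user_id": user_id,
        "report_type": report.get("type"),
        "charts": report.get("charts"),
    })

df = pd.DataFrame(records)

# Tally report runs per user and per report type.
print(df.groupby(["user_id", "report_type"]).size())
```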
Onodera:
I see. Is it possible to run this pre-processing separately and save the results back to Snowflake?
Shibata:
Yes, I think that would be possible once we've fixed which indicators to track.
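Once the indicators settle down, the pre-processed results could be written back to Snowflake as their own table. Here is a minimal sketch using snowflake-connector-python's write_pandas helper, with placeholder credentials and a hypothetical target table.

```python
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Placeholder credentials; replace with your own account settings.
conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT",
    user="YOUR_USER",
    password="YOUR_PASSWORD",
    warehouse="ANALYTICS_WH",
    database="APP_DB",
    schema="PUBLIC",
)

# Pre-processed indicator table (hypothetical values).
summary = pd.DataFrame({
    "USER_ID": ["u1", "u2"],
    "REPORT_RUNS": [12, 7],
})

# Write the processed results back to Snowflake so they can be reused later.
success, _, nrows, _ = write_pandas(
    conn,
    summary,
    table_name="REPORT_USAGE_SUMMARY",  # hypothetical target table
    auto_create_table=True,             # create the table if it does not exist
)
print(success, nrows)
conn.close()
```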
I got it!
Onodera:
There are many records here, too. It's been about a week since its release, so it's been used quite a bit.
Shibata:
Yes, it has.
If you want to see a different indicator, just change the key name in this part of the code, and you will get it.
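As a toy example of what "changing the key name" might look like; the key names and payload here are hypothetical, not Morph's actual fields.

```python
# Changing this key pulls a different indicator out of the parsed JSON;
# "charts" is only an example key, not Morph's actual field name.
INDICATOR_KEY = "charts"

def extract_indicator(report: dict, key: str = INDICATOR_KEY):
    """Return one field from a parsed report payload, or None if it is absent."""
    return report.get(key)

# Example with a hypothetical parsed payload.
report = {"type": "weekly_summary", "charts": 3, "rows_exported": 120}
print(extract_indicator(report))                   # current indicator
print(extract_indicator(report, "rows_exported"))  # switch indicators by key
```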
“Just re-run the cell to get the latest results.”
Shibata:
If you rerun this cell, you can always get the latest data.
Onodera:
Thank you very much! I will use this as soon as possible to consider marketing messages and follow-up with existing users.
Comments.
Shibata:
This session reminded me that some information, such as table structures, is only known on the engineering side. It's easy to request the data you need, but hard to figure out how to obtain it from a complex application database. That's why developers and business people sometimes have to work together.
Today, I explained the database schema and what the code does in as much detail as possible so that Onodera-san can edit it later.
In addition, recalling the syntax of data-processing code that isn't used often takes time, so the AI feature was also useful in that sense.
Onodera:
I was surprised to see that my objective was achieved in just a few minutes. In v1.0, I had to ask an engineer to analyze the data each time and would get the results the next day.
The cell we created during the interview for this article turned out to be useful in a real meeting with a client. Because I could grasp their usage beforehand, I could smoothly ask about the circumstances in which they used the function. I realized that being able to pull data on my own will broaden the scope of how I use it.