Speed up performance for DataRows #4774
Open
mpolson64 wants to merge 4 commits into facebook:main from mpolson64:export-D90713603
Conversation
@mpolson64 has exported this pull request. If you are a Meta employee, you can view the originating Diff in D90713603.
mpolson64 added a commit to mpolson64/Ax that referenced this pull request on Jan 15, 2026
Summary:
Pull Request resolved: facebook#4774

Misc improvements and tricks to make DataRows more performant. We're within spitting distance of the original dataframe-backed implementation, close enough that I'm willing to attribute the remaining difference to scheduler noise; IMO good enough to land.

1. Removed the isinstance check from Data init. This was helpful during refactoring, since some calls to Data(df) didn't use kwargs and caused errors, but it added unnecessary overhead.
2. **[BIG IMPROVEMENT]** Used df.itertuples instead of df.iterrows in Data init when initializing from a dataframe. This alone took us from 1h 44m to ~40m.
3. Added new empty, metric_names, and trial_indices properties which don't require constructing full_df.
4. Changed Experiment.attach_data to operate directly on list[DataRow] instead of on DataFrames (i.e., migrating from the combine_df_favoring_recent helper fn to the new combine_data_rows_favoring_recent fn).
5. Changed [*foo] to list(foo) in a couple of places. Metamate tells me this is faster in extremely high-data regimes; I'm not sure I notice a difference or fully trust it.

Remaining TODOs: I'd be interested in removing `property` from the methods which are not O(1); there are a lot of fairly expensive operations in Data, or at least ones which require a full scan, that look like they should be fast because they have the same syntax as an attribute lookup. If nobody has any objections, I'll ask Metamate to do this for us.

Differential Revision: D90713603
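The itertuples-vs-iterrows difference in item 2 can be sketched with a minimal, hypothetical `DataRow`; the class and column names here are illustrative assumptions, not Ax's actual code:

```python
# Sketch of item 2: building row objects via iterrows() vs. itertuples().
# DataRow and the column names are illustrative, not Ax's actual classes.
from dataclasses import dataclass

import pandas as pd


@dataclass
class DataRow:
    trial_index: int
    metric_name: str
    mean: float
    sem: float


def rows_via_iterrows(df: pd.DataFrame) -> list[DataRow]:
    # iterrows() materializes a full pandas Series per row: slow.
    return [
        DataRow(r["trial_index"], r["metric_name"], r["mean"], r["sem"])
        for _, r in df.iterrows()
    ]


def rows_via_itertuples(df: pd.DataFrame) -> list[DataRow]:
    # itertuples() yields lightweight namedtuples: much faster at scale.
    return [
        DataRow(t.trial_index, t.metric_name, t.mean, t.sem)
        for t in df.itertuples(index=False)
    ]


df = pd.DataFrame(
    {
        "trial_index": [0, 0, 1],
        "metric_name": ["a", "b", "a"],
        "mean": [1.0, 2.0, 3.0],
        "sem": [0.1, 0.2, 0.3],
    }
)
```

Both functions produce equal rows; the win comes from itertuples() skipping the per-row Series construction, which matches the 1h 44m to ~40m improvement reported above.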
mpolson64 force-pushed from 3938995 to 623655e
Summary: TData was necessary when we had multiple different Data classes, but recent developments have made it no longer needed.

Differential Revision: D90596942
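The kind of simplification this enables might look like the following; the signatures are assumed for illustration, not Ax's actual code:

```python
# Illustrative before/after of dropping a TData TypeVar once only one
# Data class exists (assumed code, not Ax's actual signatures).
from typing import TypeVar

# Before: a TypeVar so that classmethod constructors on Data subclasses
# were typed as returning the subclass.
TData = TypeVar("TData", bound="Data")


class Data:
    def __init__(self, rows: list[dict]) -> None:
        self.rows = rows

    # Old: def from_rows(cls: type[TData], rows: list[dict]) -> TData:
    # After: with a single Data class, the concrete type suffices.
    @classmethod
    def from_rows(cls, rows: list[dict]) -> "Data":
        return cls(rows)
```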
Summary: Moved these tests into TestData, since Data is the only data-related class in Ax.

Differential Revision: D90605845
Summary:
NOTE: This is much slower than the implementation backed by a dataframe. For clarity, I've put this naive implementation up as its own diff, and the next diff hunts for speedups.

Creates a new source of truth for Data: the DataRow. The df is now a cached property which is dynamically generated from these rows. In the future, these will become a Base object in SQLAlchemy, s.t. Data will have a SQLAlchemy relationship to a list of DataRows which live in their own table.

RFC:
1. I'm renaming sem -> se here (but keeping sem in the df for now, since that could be an incredibly involved cleanup). Do we have alignment that this is a positive change? If so, I can either start or backlog the cleanup across the codebase. cc Balandat, who I've talked about this with a while back.
2. This removes the ability for Data to contain arbitrary columns, which was added in D83682740 and is AFAIK unused. Arbitrary new columns would not be compatible with the new storage setup (it was easy in the old setup, which is why we added it), and I think we should take a careful look at how to store contextual data in a structured way in the future.

Differential Revision: D90605846
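The row-backed design described here can be sketched as follows; the class shapes and field names are assumptions for illustration, not Ax's real implementation:

```python
# Minimal sketch of the design above: DataRow objects are the source of
# truth, and the DataFrame is derived lazily via a cached property.
# Names and fields are assumed, not Ax's actual classes.
from dataclasses import dataclass
from functools import cached_property

import pandas as pd


@dataclass(frozen=True)
class DataRow:
    trial_index: int
    arm_name: str
    metric_name: str
    mean: float
    se: float  # renamed from "sem" per the RFC; the df keeps "sem" for now


class Data:
    def __init__(self, rows: list[DataRow]) -> None:
        self._rows = rows

    @cached_property
    def df(self) -> pd.DataFrame:
        # Built once on first access, then cached on the instance.
        return pd.DataFrame(
            {
                "trial_index": [r.trial_index for r in self._rows],
                "arm_name": [r.arm_name for r in self._rows],
                "metric_name": [r.metric_name for r in self._rows],
                "mean": [r.mean for r in self._rows],
                "sem": [r.se for r in self._rows],
            }
        )

    @property
    def metric_names(self) -> set[str]:
        # O(n) over rows, but avoids constructing the full DataFrame,
        # as in the new properties described in the next diff's summary.
        return {r.metric_name for r in self._rows}
```

Accessors like metric_names read the rows directly, so the DataFrame cost is paid only by callers that actually need tabular output.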
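The "favor recent" merge over row objects (the attach_data change in item 4 of the summary) can be sketched like this; the helper name and key fields are assumptions, not Ax's actual API:

```python
# Hedged sketch of merging rows while favoring the more recent ones,
# without round-tripping through pandas. combine_rows_favoring_recent
# and its key fields are assumed, not Ax's actual helper.
from dataclasses import dataclass


@dataclass(frozen=True)
class DataRow:
    trial_index: int
    metric_name: str
    mean: float


def combine_rows_favoring_recent(
    old: list[DataRow], new: list[DataRow]
) -> list[DataRow]:
    # Later rows win on (trial_index, metric_name); dict insertion order
    # keeps the output stable.
    by_key = {(r.trial_index, r.metric_name): r for r in list(old) + list(new)}
    return list(by_key.values())
```

Working on list[DataRow] directly avoids building intermediate DataFrames just to deduplicate, in line with the summary's move away from the DataFrame-based helper.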
mpolson64 force-pushed from 623655e to 91786af