Datasets:
question_id (int64, 59.5M to 79.7M) | creation_date (date string, 2020-01-01 to 2025-07-15) | link (string, 60 to 163 chars) | question (string, 53 to 28.9k chars) | accepted_answer (string, 26 to 29.3k chars) | question_vote (int64, 1 to 410) | answer_vote (int64, -9 to 482) |
|---|---|---|---|---|---|---|
79,320,289 | 2024-12-31 | https://stackoverflow.com/questions/79320289/why-cant-i-wrap-lgbm | I'm using LGBM to forecast the relative change of a numerical quantity. I'm using the MSLE (Mean Squared Log Error) loss function to optimize my model and to get the correct scaling of errors. Since MSLE isn't native to LGBM, I have to implement it myself. But lucky me, the math can be simplified a ton. This is my impl... | Root Cause scikit-learn expects that each of the keyword arguments to an estimator's __init__() will exactly correspond to a public attribute on instances of the class. Per https://scikit-learn.org/stable/developers/develop.html every keyword argument accepted by __init__ should correspond to an attribute on the insta... | 1 | 1 |
79,320,303 | 2024-12-31 | https://stackoverflow.com/questions/79320303/artifacts-with-pygame-when-trying-to-update-visible-sprites-only | I'm learning the basics of the pygame library and already struggling. The "game" at this point only has a player and walls. There are 2 main surfaces: "world" (the actual game map) and "screen" (which serves as a viewport for "view_src" w/ scaling & scrolling, "viewport" is the corresponding rect). Here's the problem: ... | Found the problem and the fix thanks to Kingsley's nudge. The issue: Group.clear() clears the sprites drawn by the last .draw() of that exact same group. So using a different group for .clear() and .draw() doesn't work, and the continuity it needs to function is also lost by re-assigning the "visible" group each time. ... | 2 | 0 |
79,316,973 | 2024-12-30 | https://stackoverflow.com/questions/79316973/improve-computational-time-and-memory-usage-of-the-calculation-of-a-large-matrix | I want to calculate a matrix G whose elements are scalars calculated as: I want to calculate this matrix for a large n > 10000, d > 30. My code is below, but it has a huge overhead and still takes a very long time. How can I make this computation as fast as possible? Without using GPU and Minimize th... | A convenient way is to note that each entry could also be written as: with the above notation the computation becomes much easier: import numpy as np from tqdm import tqdm from sklearn.gaussian_process.kernels import Matern from yaspin import yaspin import time from memory_profiler import profile ##-----------------... | 1 | 2
79,313,502 | 2024-12-28 | https://stackoverflow.com/questions/79313502/extracting-owner-s-username-from-nested-page-on-huggingface | I am scraping the HuggingFace research forum (https://discuss.huggingface.co/c/research/7/l/latest) using Selenium. I am able to successfully extract the following attributes from the main page of the forum: Activity Date View Count Replies Count Title URL However, I am encountering an issue when trying to extract th... | All the data you're after comes from two API endpoints. Most of what you already have can be fetched from the first one. If you follow the post, you'll get even more data and you'll find the posters section; there you can find your owner, aka the Original Poster. This is just to push you in the right direction (and no selen... | 2 | 2
79,319,663 | 2024-12-31 | https://stackoverflow.com/questions/79319663/fastapi-apache-409-response-from-fastapi-is-converted-to-502-what-can-be-the | I have a FastAPI application, which, in general, works fine. My setup is Apache as a proxy and FastAPI server behind it. This is the apache config: ProxyPass /fs http://127.0.0.1:8000/fs retry=1 acquire=3000 timeout=600 Keepalive=On disablereuse=ON ProxyPassReverse /fs http://127.0.0.1:8000/fs I have one endpoint that... | So, I have found the reason. When there is a file upload you need to read the input buffer in any case, even if you want to return an error. In my case I had to add a try/except to empty the buffer when an exception happens. Something like try: ... my original code except Exception as e: # Empty input buffer here to avoid ... | 1 | 0
79,316,958 | 2024-12-30 | https://stackoverflow.com/questions/79316958/mlagents-learn-help-is-giving-errors-python-3-11-3-10-3-9-3-8 | I am trying to install mlagents. I got to the Python part, but after creating a virtual environment with pyenv and setting the local version to 3.10, 3.9, and 3.8, it works on none of them. I upgraded pip, installed mlagents, then torch, torchvision, and torchaudio. Then I tested mlagents-learn --help and then because o... | Try deleting your Unity project and making a new one. Unity says to use conda, so try that too. Use Python 3.9. | 1 | 2
79,318,540 | 2024-12-30 | https://stackoverflow.com/questions/79318540/django-model-foreign-key-to-whichever-model-calls-it | I am getting back into Django after a few years, and am running into the following problem. I am making a system where there are 2 models; a survey, and an update. I want to make a notification model that would automatically have an object added when I add a survey object or update object, and the notification object w... | GenericForeignKey to the rescue: A normal ForeignKey can only “point to” one other model, which means that if the TaggedItem model used a ForeignKey it would have to choose one and only one model to store tags for. The contenttypes application provides a special field type (GenericForeignKey) which works around this a... | 2 | 2 |
79,319,263 | 2024-12-31 | https://stackoverflow.com/questions/79319263/why-does-geopandas-dissolve-function-keep-working-forever | All, I am trying to use the Geopandas dissolve function to aggregate a few countries; the function countries.dissolve keeps running forever. Here is a minimal script. import geopandas as gpd shape='/Volumes/TwoGb/shape/fwdshapfileoftheworld/' countries=gpd.read_file(shape+'TM_WORLD_BORDERS-0.3.shp') # Add columns count... | Dissolve is working when I try it, it finishes in a few seconds. My Geopandas version is 1.0.1. import geopandas as gpd df = gpd.read_file(r"C:\Users\bera\Downloads\TM_WORLD_BORDERS-0.3.shp") df.plot(column="NAME") df2 = df.dissolve() df2.plot() There are some invalid geometries that might cause problems for you? T... | 1 | 2 |
79,318,939 | 2024-12-31 | https://stackoverflow.com/questions/79318939/loaded-keras-model-throws-error-while-predicting-likely-issues-with-masking | I am currently developing and testing an RNN that relies upon a large amount of data for training, and so have attempted to separate my training and testing files. I have one file where I create, train, and save a tensorflow.keras model to a file 'model.keras' I then load this model in another file and predict some valu... | That error is due to the mask_value that you pass into tf.keras.layers.Masking not getting serialized compatibly for deserialization. But because your masking layer is a tensor containing all 0s anyway, you can instead just pass a scalar value like this and it will eliminate the need to serialize a tensor while storing ... | 1 | 1
79,320,886 | 2024-12-31 | https://stackoverflow.com/questions/79320886/numpy-einsum-why-did-this-happen | Can you explain why this happened? import numpy as np a = np.array([[1,2], [3,4], [5,6] ]) b = np.array([[2,2,2], [2,2,2]]) print(np.einsum("xy,zx -> yx",a,b)) and the output of the code is: [[ 4 12 20] [ 8 16 24]], which means the answer is calculated like this: [1*2+1*2 , 3*2+3*2 , ...] But I expected it to be calcul... | Your code is equivalent to: (a[None] * b[..., None]).sum(axis=0).T You start with a (x, y) and b (z, x). First let's align the arrays: # a[None] shape: (1, x, y) array([[[1, 2], [3, 4], [5, 6]]]) # b[..., None] shape: (z, x, 1) array([[[2], [2], [2]], [[2], [2], [2]]]) and multiply: # a[None] * b[..., None] shape: (z... | 1 | 1
79,320,784 | 2024-12-31 | https://stackoverflow.com/questions/79320784/bot-not-responding-to-channel-posts-in-telegram-bot-api-python-telegram-bot | I'm developing a Telegram bot using python-telegram-bot to handle and reply to posts in a specific channel. The bot starts successfully and shows "Bot is running...", but it never replies to posts in the channel. Here's the relevant code for handling channel posts: async def handle_channel_post(self, update: Update, co... | The issue is with this part of the code: if message.chat.username != self.channel_username: return The message.chat.username returns the channel username without the '@' and your self.channel.username includes '@' Try this: if message.chat.username != self.channel_username.replace("@", ""): return It removes '@' from... | 3 | 2 |
79,318,200 | 2024-12-30 | https://stackoverflow.com/questions/79318200/return-placeholder-values-with-formatting-if-a-key-is-not-found | I want to silently ignore KeyErrors and instead replace them with placeholders if values are not found. For example: class Name: def __init__(self, name): self.name = name self.capitalized = name.capitalize() def __str__(self): return self.name "hello, {name}!".format(name=Name("bob")) # hello, bob! "greetings, {name.c... | TL;DR The best solution is to override get_field instead of get_value in CustomFormatter: class CustomFormatter(string.Formatter): def get_field(self, field_name, args, kwargs): try: return super().get_field(field_name, args, kwargs) except (AttributeError, KeyError): return f"{{{field_name}}}", None Kudos to @blhsin... | 2 | 2
79,320,041 | 2024-12-31 | https://stackoverflow.com/questions/79320041/python-flask-blueprint-parameter | I need to pass a parameter (some_url) from the main app to the blueprint using Flask. This is my (oversimplified) app app = Flask(__name__) app.register_blueprint(my_bp, url_prefix='/mybp', some_url ="http....") This is my (oversimplified) blueprint my_bp = Blueprint('mybp', __name__, url_prefix='/mybp') @repositories_... | You can use the g object, which stores temporary data for the current request; or use session to maintain data across multiple requests (usually stored in the client browser as a cookie); or store the data in app.config to maintain a constant value. | 1 | 0
79,318,743 | 2024-12-30 | https://stackoverflow.com/questions/79318743/how-to-create-combinations-from-dataframes-for-a-specific-combination-size | Say I have a dataframe with 2 columns, how would I create all possible combinations for a specific combination size? Each row of the df should be treated as 1 item in the combination rather than 2 unique separate items. I want the columns of the combinations to be appended to the right. The solution should ideally be e... | An approach is itertools to generate the combinations. Define the combination size and generate all possible combinations of rows using itertools.combinations Flatten each combination into a single list of values using itertools.chain. combination_df is created from the flattened combinations and the columns are dynam... | 1 | 1 |
79,319,708 | 2024-12-31 | https://stackoverflow.com/questions/79319708/confused-by-documentation-about-behavior-of-globals-within-a-function | Per the Python documentation of globals(): For code within functions, this is set when the function is defined and remains the same regardless of where the function is called. I understood this as calling globals() from within a function returns an identical dict to the one that represented the global namespace when ... | In fact this problem is only loosely related to the globals() builtin function but more closely related to the behaviour of mutable objects. Long story made short, your observation is correct, and the documentation is absolutely correct and accurate. The underlying cause, is that Python variables are only references to... | 1 | 1 |
79,319,434 | 2024-12-31 | https://stackoverflow.com/questions/79319434/duplicate-null-columns-created-during-pivot-in-polars | I have this example dataframe in polars: df_example = pl.DataFrame( { "DATE": ["2024-11-11", "2024-11-11", "2024-11-12", "2024-11-12", "2024-11-13"], "A": [None, None, "option1", "option2", None], "B": [None, None, "YES", "YES", "NO"], } ) Which looks like this: DATE A B 0 2024-11-11 1 2024-11-11 2 202... | I ended up with: ( df_example.pipe( lambda df: df.group_by("DATE").agg( [ pl.col(col).eq(val).any().alias(f"{col}_{val}") for col in df.select(pl.exclude("DATE")).columns for val in df.get_column(col).unique().drop_nulls() ] ) ).sort("DATE") ) | 2 | 1 |
79,319,156 | 2024-12-31 | https://stackoverflow.com/questions/79319156/how-to-add-python-type-annotations-to-a-class-that-inherits-from-itself | I'm trying add type annotations to an ElementList object that inherits from list and can contain either Element objects or other ElementGroup objects. When I run the following code through mypy: from typing import Self class Element: pass class ElementList(list[Element | Self]): pass elements = ElementList( [ Element()... | Your sample list argument to the ElementList constructor contains not just Elements and ElementLists but also actual lists, so a workaround of class ElementList(list["Element | ElementList"]): ... would not have worked, as @dROOOze pointed out in the comment, because list is not a subtype of ElementList. You can work a... | 1 | 1 |
79,317,395 | 2024-12-30 | https://stackoverflow.com/questions/79317395/multi-columns-legend-in-geodataframe | I tried to plot Jakarta's map based on the district. fig, ax = plt.subplots(1, figsize=(4.5,10)) jakarta_mandiri_planar.plot(ax=ax, column='Kecamatan', legend=True, legend_kwds={'loc':'center left'}) leg= ax.get_legend() leg.set_bbox_to_anchor((1.04, 0.5)) I plotted the legend on the right of the map, but I think it'... | Use the ncols keyword: df.plot(column="NAME", cmap="tab20", legend=True, figsize=(8,8)) df.plot(column="NAME", cmap="tab20", legend=True, figsize=(10,10), legend_kwds={"ncols":2, "loc":"lower left"}) | 1 | 1 |
79,315,937 | 2024-12-29 | https://stackoverflow.com/questions/79315937/in-ta-lib-cython-compiler-errors-internalerror-internal-compiler-error-com | While running a program on PyCharm using Python, I am getting the error below. Unable to run the program due to the below error: ERROR: Failed building wheel for TA-Lib-Precompiled ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (TA-Lib-Precompiled) > Package :TA-... | The stable release of TA-Lib-Precompiled only has wheels for Python 3.8 - 3.11 for Linux. You can install the Windows Subsystem for Linux (WSL), which provides a Linux environment on your Windows machine, and then use a supported Python version such as Python 3.11. See How to install Linux on Windows with WSL for detaile... | 2 | 1
79,317,602 | 2024-12-30 | https://stackoverflow.com/questions/79317602/python-selenium-need-help-in-locating-username-and-password | I am new to Selenium. I am trying to scrape financial data on TradingView. I am trying to log into https://www.tradingview.com/accounts/signin/ . I understand that I am facing a timeout issue right now; is there any way to fix this? Thank you to anybody helping, much appreciated. However, I am facing a lot of errors wi... | To locate the login form on the sign-in page, it is necessary to click the "Email" button first in order to proceed with submitting the login form. I have included the following two lines in the script to accomplish this. email_button = driver.find_element(By.XPATH, "//button[@name='Email']") email_button.click() The ... | 1 | 1
79,317,247 | 2024-12-30 | https://stackoverflow.com/questions/79317247/how-to-do-a-clean-install-of-python-from-source-in-a-docker-container-image-ge | Currently I have to create Docker images that build python from source (for example we do need two different python versions in a container, one python version for building and one for testing the application, also we need to exactly specify the python version we want to install and newer versions are not supported via... | Research and read Dockerfile best practices, for example https://docs.docker.com/build/building/best-practices/#apt-get . Remove the src directory and any build artifacts after you are done installing. Remove packages in the same stage as you install them. Additionally, you might be interested in the pyenv project that streaml... | 2 | 1
79,317,098 | 2024-12-30 | https://stackoverflow.com/questions/79317098/python-logging-filter-works-with-console-but-still-writes-to-file | I am saving the logs to a text file and displaying them to the console at the same time. I would like to apply a filter on the logs, so that some logs neither make it to the text file nor the console output. However, with this code, the logs that I would like to filter out are still being saved to the text file. The fi... | basicConfig created a FileHandler and a StreamHandler was also created and added to the logger. The filter was only applied to the StreamHandler. To filter both handlers, add the filter to the logger instead: import logging class applyFilter(logging.Filter): def filter(self, record): return not record.getMessage().star... | 1 | 0 |
79,316,851 | 2024-12-30 | https://stackoverflow.com/questions/79316851/sympy-integration-with-cosine-function-under-a-square-root | I am trying to solve the integration integrate( sqrt(1 + cos(2 * x)), (x, 0, pi) ) Clearly, through pen and paper this is not hard, and the result is: But when doing this through Sympy, something does not seem correct. I tried the sympy codes as below. from sympy import * x = symbols("x", real=True) integrate(sqrt(1 ... | Adding a simplification in there will produce the correct result, but I'm not sure why it is having an issue in the first place. integrate(sqrt(1+cos(2*x)).simplify(), (x, 0, pi)) # 2*sqrt(2) | 5 | 3 |
79,316,346 | 2024-12-29 | https://stackoverflow.com/questions/79316346/how-to-include-exception-handling-within-a-python-pool-starmap-multiprocess | I'm using the metpy library to do weather calculations. I'm using the multiprocessing library to run them in parallel, but I get rare exceptions, which completely stop the program. I am not able to provide a minimal, reproducible example because I can't replicate the problems with the metpy library functions and becaus... | You can generalize run_ccl with a wrapper function that suppresses specified exceptions and returns NaN as a default value: from contextlib import suppress def suppressor(func, *exceptions): def wrapper(*args, **kwargs): with suppress(*exceptions): return func(*args, **kwargs) return float('nan') return wrapper with w... | 1 | 2 |
79,316,278 | 2024-12-29 | https://stackoverflow.com/questions/79316278/is-there-a-more-elegant-rewrite-for-this-python-enum-value-of-implementation | I would like to get a value_of implementation for the StrEnum (Python 3.9.x). For example: from enum import Enum class StrEnum(str, Enum): """Enum with str values""" pass class BaseStrEnum(StrEnum): """Base Enum""" @classmethod def value_of(cls, value): try: return cls[value] except KeyError: try: return cls(value) exc... | Since upon success of the first try block the function will return and won't execute the code that follows, there is no need to nest the second try block in the error handler of the first try block to begin with: def value_of(cls, value): try: return cls[value] except KeyError: pass try: return cls(value) except ValueE... | 2 | 2 |
79,316,309 | 2024-12-29 | https://stackoverflow.com/questions/79316309/how-does-this-code-execute-the-finally-block-even-though-its-never-evaluated-to | def divisive_recursion(n): try: if n <= 0: return 1 else: return n + divisive_recursion(n // divisive_recursion(n - 1)) except ZeroDivisionError: return -1 finally: if n == 2: print("Finally block executed for n=2") elif n == 1: print("Finally block executed for n=1") print(divisive_recursion(5)) Here, divisive_recurs... | In one of the comments, you ask "does that mean once the program encounters the crash, it will execute all the finally blocks upward the recursion before it finally crashes". And the answer is basically "yes". An exception isn't really a "crash", or perhaps think of it as a controlled way of crashing. Here is a simple ... | 2 | 3 |
79,316,399 | 2024-12-29 | https://stackoverflow.com/questions/79316399/how-do-i-remove-an-image-overlay-in-matplotlib | Using matplotlib and python, I have a grey-scale image of labeled objects, on which I want to draw a homogeneously coloured overlay image with a position and shape based on a changeable input parameter - an object identifier. Basically an outline and enhancement of one of the objects in the image. I can generate the ove... | If you're trying to update an overlay on a grayscale image without accumulating overlays, you should use this approach: import matplotlib.pyplot as plt import numpy as np import numpy.ma as ma def create_interactive_overlay(object_data): """ Creates a figure with a grayscale base image and functions to update overlays. Paramete... | 1 | 1
79,306,760 | 2024-12-25 | https://stackoverflow.com/questions/79306760/how-to-get-full-traceback-messages-when-the-open-syscall-is-banned | I am working on providing an environment for running users' untrusted python code. I use the python bindings of libseccomp library to avoid triggering unsafe system calls, and the service is running in a docker container. Here is the script that will be executed in my environment. P.S. The list of banned syscalls is fr... | EDIT: You will have to grant write access to stdout and stderr. Since these files are opened as the process is started, you can selectively restrict write access to these files only without having to worry about untrusted code modifying other files. You can add write permissions to stdout and stderr in your code like t... | 1 | 1 |
79,311,280 | 2024-12-27 | https://stackoverflow.com/questions/79311280/dask-var-and-std-with-ddof-in-groupby-context-and-other-aggregations | Suppose I want to compute variance and/or standard deviation with non-default ddof in a groupby context, I can do: df.groupby("a")["b"].var(ddof=2) If I want that to happen together with other aggregations, I can use: df.groupby("a").agg(b_var = ("b", "var"), c_sum = ("c", "sum")) My understanding is that to be able ... | As answered in Dask Discourse Forum, I don't think your custom Aggregation implementation is correct. However, a simpler solution can be applied: import dask.dataframe as dd import functools data = { "a": [1, 1, 1, 1, 2, 2, 2], "b": range(7), "c": range(10, 3, -1), } df = dd.from_dict(data, 2) var_ddof_2 = functools.pa... | 2 | 3 |
79,314,406 | 2024-12-28 | https://stackoverflow.com/questions/79314406/n-unique-aggregation-using-duckdb-relational-api | Say I have import duckdb rel = duckdb.sql('select * from values (1, 4), (1, 5), (2, 6) df(a, b)') rel Out[3]: ┌───────┬───────┐ │ a │ b │ │ int32 │ int32 │ ├───────┼───────┤ │ 1 │ 4 │ │ 1 │ 5 │ │ 2 │ 6 │ └───────┴───────┘ I can group by a and find the mean of 'b' by doing: rel.aggregate( [duckdb.FunctionExpression('m... | Updated: I couldn't find a proper way of doing a count distinct, but you could use a combination of the array_agg() and array_unique() functions: rel.aggregate( [duckdb.FunctionExpression( 'array_unique', duckdb.FunctionExpression( 'array_agg', duckdb.ColumnExpression('b') ) )], group_expr='a', ) ┌────────────────────────────┐ ... | 2 | 1
79,314,321 | 2024-12-28 | https://stackoverflow.com/questions/79314321/use-an-expression-dictionary-to-calculate-row-wise-based-on-a-column-in-polars | I want to use an expression dictionary to perform calculations for a new column. I have this Polars dataframe: df=pl.DataFrame( "col1": ["a", "b", "a"], "x": [1,2,3], "y": [2,2,5] ) And I have an expression dictionary: expr_dict = { "a": pl.col("x") * pl.col("y"), "b": pl.col("x"), } I want to create a column where e... | pl.when() for conditional expression. pl.coalesce() to combine conditional expressions together. df.with_columns( r = pl.coalesce( pl.when(pl.col.col1 == k).then(v) for k, v in expr_dict.items() ) ) shape: (3, 4) ┌──────┬─────┬─────┬─────┐ │ col1 ┆ x ┆ y ┆ r │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ i64 │ ╞═══... | 1 | 2 |
79,310,142 | 2024-12-26 | https://stackoverflow.com/questions/79310142/how-to-extract-sub-arrays-from-a-larger-array-with-two-start-and-two-stop-1-d-ar | I am looking for a way to vectorize the following code, # Let cube have shape (N, M, M) sub_arrays = np.empty(len(cube), 3, 3) row_start = ... # Shape (N,) and are integers in range [0, M-2] row_end = ... # Shape (N,) and are integers in range [1, M-1] col_start = ... # Shape (N,) and are integers in range [0, M-2] col... | I believe this question is a duplicate of the one about Slicing along axis with varying indices. However, since it may not be obvious, I think it's okay to provide the answer in a new context with a somewhat different approach. From what I can see, you want to extract data from the cube using a sliding window of a fixe... | 3 | 1 |
79,313,103 | 2024-12-28 | https://stackoverflow.com/questions/79313103/asof-join-with-multiple-inequality-conditions | I have two dataframes: a (~600M rows) and b (~2M rows). What is the best approach for joining b onto a, when using 1 equality condition and 2 inequality conditions on the respective columns? a_1 = b_1 a_2 >= b_2 a_3 >= b_3 I have explored the following paths so far: Polars: join_asof(): only allows for 1 inequality... | Using Numba here is a good idea since the operation is particularly expensive. That being said, the complexity of the algorithm is O(n²) though it is not easy to do much better (without making the code much more complex). Moreover, the array b_1, which might not fit in the L3 cache, is fully read 5_000_000 times making... | 5 | 2 |
79,313,133 | 2024-12-28 | https://stackoverflow.com/questions/79313133/sqlalchemy-one-or-more-mappers-failed-to-initialize | I know this Question has been asked a lot and believe me I checked the answers and to me my code looks fine even tough it gives error so it's not. Basically, I was trying to set up a relationship between two Entities: User and Workout. from sqlalchemy import Integer,VARCHAR,TIMESTAMP from sqlalchemy.orm import mapped_c... | This is sort of a weird problem that I have not seen a perfect solution to. SQLAlchemy allows this "deferred" referencing of other models/etc by str name so that you don't end up with circular imports, ie. User must import Workout and Workout must import User. The problem that happens is that by not directly referencin... | 2 | 0 |
79,313,343 | 2024-12-28 | https://stackoverflow.com/questions/79313343/how-to-fix-setuptools-scm-file-finders-git-listing-git-files-failed | I am using pyproject.toml to build a package. I use setuptools_scm to automatically determine the version number. I use python version 3.11.2, setuptools 66.1.1 and setuptools-scm 8.1.0. Here are the relevant parts of pyproject.toml # For a discussion on single-sourcing the version, see # https://packaging.python.org/g... | python3 -m build builds in 2 phases: 1st it builds sdist and then it builds wheel from the sdist in an isolated environment where there is no .git directory. It doesn't matter because at the wheel building phase version is already set in sdist and build gets the version from sdist, not from setuptools_scm. In short: yo... | 1 | 1 |
79,313,112 | 2024-12-28 | https://stackoverflow.com/questions/79313112/combine-two-pandas-dataframes-side-by-side-with-resulting-length-being-maxdf1 | Essentially, what I described in the title. I am trying to combine two dataframes (i.e. df1 & df2) where they have different amounts of columns (df1=3, df2=8) with varying row lengths. (The varying row lengths stem from me having a script that breaks main two excel lists into blocks based on a date condition). My goal ... | Your issue arises because you are concatenating dataframes vertically rather than horizontally. To achieve the desired output, you need to align rows from df1 and df2 with the same index and then concatenate horizontally. Here’s the updated code that would produce the output you want. I have added comments on the place... | 4 | 3 |
79,312,644 | 2024-12-27 | https://stackoverflow.com/questions/79312644/extracting-substring-between-optional-substrings | I need to extract a substring which is between two other substrings. But I would like to make the border substrings optional - if no substrings found then the whole string should be extracted. patt = r"(?:bc)?(.*?)(?:ef)?" a = re.sub(patt, r"\1", "bcdef") # d - as expected a = re.sub(patt, r"\1", "abcdefg") # adg - as ... | By making the bc and ef patterns optional, you'll get into situations where the one is matched, while the other is not. Yet, you'd need both of them or neither. The requirement that you need the whole input to match when these delimiters are not present really overcomplicates it. Realise that if there is no match, sub ... | 3 | 3 |
79,312,133 | 2024-12-27 | https://stackoverflow.com/questions/79312133/getting-all-leaf-words-reverse-stemming-into-one-python-list | On the same lines as the solution provided in this link, I am trying to get all leaf words of one stem word. I am using the community-contributed (@Divyanshu Srivastava) package get_word_forms Imagine I have a shorter sample word list as follows: my_list = [' jail', ' belief',' board',' target', ' challenge', ' command... | One solution using nested list comprehensions after stripping forgotten spaces: all_words = [setx for word in my_list for setx in get_word_forms(word.strip()).values() if len(setx)] # Flatten the list of sets all_words = [word for setx in all_words for word in setx] # Remove the repetitions and sort the set all_words =... | 1 | 1 |
79,313,107 | 2024-12-28 | https://stackoverflow.com/questions/79313107/how-to-have-pyright-infer-type-from-an-enum-check | Can types be associated with enums, so that Pyright can infer the type from an equality check? (Without cast() or isinstance().) from dataclasses import dataclass from enum import Enum, auto class Type(Enum): FOO = auto() BAR = auto() @dataclass class Foo: type: Type @dataclass class Bar: type: Type item = next(i for i... | You want a discriminated union (also known as tagged union). In a discriminated union, there exists a discriminator (also known as a tag field) which can be used to differentiate the members. You currently have an union of Foo and Bar, and you want to discriminate them using the .type attribute. However, this field can... | 2 | 2 |
79,312,774 | 2024-12-27 | https://stackoverflow.com/questions/79312774/inconsistent-url-error-in-django-from-following-along-to-beginner-yt-tutorial | As you can see in the first screenshot, /products/new isn't showing up as a valid URL although I followed the coding tutorial from YouTube exactly. For some reason there's a blank character before "new" but no blank space in the current path I'm trying to request. I don't know if that's normal or not. I'm using django ... | Add a trailing slash / to your URLpatterns to resolve this issue i.e. new/ and trending/. Also as mentioned in my comment, I would suggest you upgrade to a secure version of Django to access newer features. | 3 | 2 |
79,310,840 | 2024-12-27 | https://stackoverflow.com/questions/79310840/pil-generate-an-image-from-applying-a-gradient-to-a-numpy-array | I have a 2d NumPy array with values from 0 to 1. I want to turn this array into a Pillow image. I can do the following, which gives me a nice greyscale image: arr = np.random.rand(100,100) img = Image.fromarray((255 * arr).astype(np.uint8)) Now, instead of making a greyscale image, I'd like to apply a custom gradient.... | Method 1: vectorization of your code Your code is almost already vectorized. Almost all operations of it can work indifferently on a float or on an array of floats Here is a vectorized version def get_color_atArr(arr): assert (arr>=0).all() and (arr<=1).all() n=len(gradient) gradient.append(gradient[-1]) gradient=np.ar... | 2 | 2 |
79,311,978 | 2024-12-27 | https://stackoverflow.com/questions/79311978/how-can-i-optimize-python-code-for-analysis-of-a-large-sales-dataset | I’m working on a question where I have to process a large set of sales transactions stored in a CSV file and summarize the results. The code is running slower than expected and taking too much time for execution, especially as the size of the dataset increases. I am using pandas to load and process the data, are there ... | First of all, the df['category'] = np.select(...) line is slow because of the implicit conversion of all strings to a list of string objects. You can strongly speed this up by creating a categorical column rather than string-based one, since strings are inherently slow to compute. df['category'] = pd.Categorical.from_c... | 1 | 3 |
79,311,933 | 2024-12-27 | https://stackoverflow.com/questions/79311933/how-to-solve-multiple-and-nested-discriminators-with-pydantic-v2 | I am trying to validate Slack interaction payloads, that look like these: type: block_actions container: type: view ... type: block_actions container: type: message ... type: view_submission ... I use 3 different models for payloads coming to the same interaction endpoint: class MessageContainer(BaseModel): type: Li... | Not sure it's possible to use 2 discriminators to resolve one type (as you are trying to do). I can suggest you 3 options: 1. Split block_actions into block_message_actions and block_view_actions: from typing import Annotated, Literal from pydantic import BaseModel, Field, TypeAdapter class MessageContainer(BaseModel):... | 1 | 2 |
79,309,271 | 2024-12-26 | https://stackoverflow.com/questions/79309271/pandas-series-subtract-pandas-dataframe-strange-result | I'm wondering why pandas Series subtract a pandas dataframe produce such a strange result. df = pd.DataFrame(np.arange(10).reshape(2, 5), columns='a-b-c-d-e'.split('-')) df.max(axis=1) - df[['b']] What are the steps for pandas to produce the result? b 0 1 0 NaN NaN NaN 1 NaN NaN NaN | By default an operation between a DataFrame and a Series is broadcasted on the DataFrame by column, over the rows. This makes it easy to perform operations combining a DataFrame and aggregation per column: # let's subtract the DataFrame to its max per column df.max(axis=0) - df[['b']] a b c d e b NaN 5 NaN NaN NaN 1 Na... | 1 | 1 |
79,310,713 | 2024-12-27 | https://stackoverflow.com/questions/79310713/how-to-apply-the-capitalize-with-condition | I'm wondering how to use the capitalize function when another column has a specific value. For example, I want to change the first letter of students with Master's degree. # importing pandas as pd import pandas as pd # creating a dataframe df = pd.DataFrame({ 'A': ['john', 'bODAY', 'minA', 'peter', 'nicky'], 'B': ['Mas... | Here is the complete code: import pandas as pd # Creating the DataFrame df = pd.DataFrame({ 'A': ['john', 'bODAY', 'minA', 'peter', 'nicky'], 'B': ['Masters', 'Graduate', 'Graduate', 'Masters', 'Graduate'], 'C': [27, 23, 21, 23, 24] }) # Capitalize column A conditionally based on B df['A'] = df.apply(lambda row: row['A... | 1 | 1 |
79,309,886 | 2024-12-26 | https://stackoverflow.com/questions/79309886/parsing-units-out-of-column | I've got some data I'm reading into Python using Pandas and want to keep track of units with the Pint package. The values have a range of scales, so have mixed units, e.g. lengths are mostly meters but some are centimeters. For example the data: what,length foo,5.3 m bar,72 cm and I'd like to end up with the length co... | Going through the examples, it looks like pint_pandas is expecting numbers rather than strings. You can use apply to do the conversion: from pint import UnitRegistry ureg = UnitRegistry() df["length"].apply(lambda i: ureg(i)).astype("pint[m]") However, why keep the column as Quantity objects instead of just plain floa... | 1 | 2 |
79,309,190 | 2024-12-26 | https://stackoverflow.com/questions/79309190/numpy-convention-for-storing-time-series-of-vectors-and-matrices-items-in-rows | I'm working with discrete-time simulations of ODEs with time varying parameters. I have time series of various data (e.g. time series of state vectors generated by solve_ivp, time series of system matrices generated by my control algorithm, time series of system matrices in modal form, and so on). My question: in what ... | This is strongly dependent of the algorithms applied on your dataset. This problem is basically known as AoS versus SoA. For algorithm that does not benefit much from SIMD operations and accessing all fields, AoS can be better, otherwise SoA is often better. The optimal data structure is often AoSoA, but it is often a ... | 1 | 2 |
79,309,025 | 2024-12-26 | https://stackoverflow.com/questions/79309025/why-does-summing-data-grouped-by-df-iloc-0-also-sum-up-the-column-names | I have a DataFrame with a species column and four arbitrary data columns. I want to group it by species and sum up the four data columns for each one. I've tried to do this in two ways: once by grouping by df.columns[0] and once by grouping by df.iloc[:, 0]. data = { 'species': ['a', 'b', 'c', 'd', 'e', 'rt', 'gh', 'ed... | In groupby - column name is treated as an intrinsic grouping key, while a Series is treated as an external key. Reference - https://pandas.pydata.org/docs/reference/groupby.html When using df.iloc[:, 0]: Pandas considers the string values in the species column as a separate grouping key independent of the DataFrame str... | 2 | 0 |
79,308,731 | 2024-12-26 | https://stackoverflow.com/questions/79308731/safest-way-to-incrementally-append-to-a-file | I'm performing some calculations to generate chaotic solutions to a mathematical function. I have an infinite loop that looks something like this: f = open('solutions.csv', 'a') while True: x = generate_random_parameters() # x is a list of floats success = test_parameters(x) if success: print(','.join(map(str, x)), fil... | One simple approach to ensuring that the current call to print finishes before the program exits from a keyboard interrupt is to use a signal handler to unset a flag on which the while loop runs. Set the signal handler only when you're about to call print and reset the signal handler to the original when print returns,... | 3 | 2 |
79,307,295 | 2024-12-25 | https://stackoverflow.com/questions/79307295/what-is-the-best-way-to-avoid-detecting-words-as-lines-in-opencv-linedetector | I am using OpenCV LineDetector class in order to parse tables. However, I face an issue when I try to detect lines inside the table. for the following image: I use img = cv2.imread(TABLE_PATH) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) lsd = cv2.createLineSegmentDetector(cv2.LSD_REFINE_ADV, sigma_scale=0.6) dlines =... | You got some lines detected, but that set contained some undesirable ones. You could just filter the set of lines for line length. If you do that, you can easily exclude the very short lines coming from the text in that picture. Implementation: that's a list comprehension, only including lines that are long enough. Wri... | 3 | 0 |
79,332,328 | 2025-1-6 | https://stackoverflow.com/questions/79332328/pydantic-model-how-to-exclude-field-from-being-hashed-eq-compared | I have the following hashable pydantic model: class TafReport(BaseModel, frozen=True): download_date: dt icao: str issue_time: dt validity_time_start: dt validity_time_stop: dt raw_report: str Now I don't want these reports to be considered different just because their download date is different (I insert that with th... | Unfortunately there is no built-in option at the moment, but there are two options that you can consider: Changing from BaseModel to a Pydantic dataclass: from dataclasses import field from datetime import datetime as dt from pydantic import TypeAdapter from pydantic.dataclasses import dataclass @dataclass(frozen=True)... | 5 | 1 |
79,336,604 | 2025-1-7 | https://stackoverflow.com/questions/79336604/failed-creating-mock-folders-with-pyfakefs | I'm working on a project that uses pyfakefs to mock my filesystem to test folder creation and missing folders in a previously defined tree structure. I'm using Python 3.13 on Windows and get this output from the terminal after running my test: Terminal output: (Does anyone have a tip for formatting terminal output with... | The issue has been acknowledged, fixed, and the fix has been included in the 5.7.4 release of pyfakefs. No workaround should thus be necessary, any longer. | 1 | 1 |
79,321,826 | 2025-1-1 | https://stackoverflow.com/questions/79321826/seleniumbase-cdp-mode-opening-new-tabs | I am currently writing a python program which uses a seleniumbase web bot with CDP mode activated: with SB(uc=True, test=True, xvfb=True, incognito=True, agent=<user_agent>, headless=True) as sb: temp_email_gen_url = "https://temp-mail.org/en" sb.activate_cdp_mode(temp_email_gen_url) ... I need to be able to create ne... | For better or worse there isn't an "open tab" feature in CDP mode. The main developer of seleniumbase suggests using a separate driver in CDP mode for each tab as follows, equivalent to using "open in new window" on every link: from seleniumbase import SB # opens all links on the target page with a second driver with S... | 1 | 1 |
Description
- GitHub repository: https://github.com/EshanJayasundara/Stackoverflow-Python-Q-and-A-Extractor.
- The repository contains the automated workflow for extracting the question-answer pairs from Stack Overflow.
- This dataset contains question-answer pairs extracted from Stack Overflow using Stack Exchange API v2.3 endpoints, covering January 1, 2020 to the present.
1. Dataset description:
   - Contains only `python`-tagged question-answer pairs.
   - Each question has a vote greater than or equal to 1.
   - Only contains questions that have accepted answers, together with the corresponding accepted answers.
   - May contain a small number (~30) of accepted answers with negative votes.
2. Column description:
   - `question_id`: ID from Stack Overflow.
   - `creation_date`: the date when the question was created.
   - `link`: link to the Stack Overflow page for that question-answer pair.
   - `question`: question text.
   - `accepted_answer`: accepted answer text.
   - `question_vote`: score/vote given to the `question` by the Stack Overflow community.
   - `answer_vote`: score/vote given to the `accepted_answer` by the Stack Overflow community.
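The extraction workflow itself lives in the linked repository and is not reproduced here, but a request against the Stack Exchange API v2.3 `/questions` endpoint can be sketched as follows. The parameter values (tag, start date, page size) mirror the filters described above; the function name is an illustrative assumption, and a real run would also need an API key and throttling to respect the API's rate limits:

```python
from datetime import datetime, timezone

API_BASE = "https://api.stackexchange.com/2.3"

def build_question_params(tag="python", from_date="2020-01-01", page=1):
    """Build query parameters for the /questions endpoint.

    Only questions tagged `tag` and created on or after `from_date`
    are requested. The `filter` parameter controlling which response
    fields are returned is left at the API default here.
    """
    # The API expects `fromdate` as Unix epoch seconds (UTC).
    from_ts = int(
        datetime.strptime(from_date, "%Y-%m-%d")
        .replace(tzinfo=timezone.utc)
        .timestamp()
    )
    return {
        "site": "stackoverflow",
        "tagged": tag,
        "fromdate": from_ts,
        "sort": "creation",
        "order": "asc",
        "page": page,
        "pagesize": 100,  # maximum page size allowed by the API
    }

# The actual request would then be, e.g.:
# requests.get(f"{API_BASE}/questions", params=build_question_params())
```

Accepted answers would still need a second lookup (the question payload only carries an `accepted_answer_id`), which is why the card mentions multiple endpoints.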
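The filtering rules above can also be applied on the consumer side once the rows are loaded, for example to drop the ~30 accepted answers with negative votes. The sketch below uses plain Python dicts shaped like this dataset's columns; the sample rows and the `keep` helper are made up for illustration and are not part of the extraction workflow:

```python
# Hypothetical sample rows with the same column names as the dataset.
rows = [
    {"question_id": 1, "question_vote": 3, "answer_vote": 5,  "accepted_answer": "use a dict"},
    {"question_id": 2, "question_vote": 0, "answer_vote": 2,  "accepted_answer": "try numpy"},
    {"question_id": 3, "question_vote": 4, "answer_vote": -1, "accepted_answer": "see the docs"},
]

def keep(row, drop_negative_answers=False):
    """Apply the card's filters: question vote >= 1 and a present accepted answer.

    `drop_negative_answers` additionally removes rows whose accepted
    answer has a negative vote.
    """
    if row["question_vote"] < 1:
        return False
    if not row["accepted_answer"]:
        return False
    if drop_negative_answers and row["answer_vote"] < 0:
        return False
    return True

kept = [r for r in rows if keep(r, drop_negative_answers=True)]
print([r["question_id"] for r in kept])  # row 2 (vote 0) and row 3 (negative answer) are dropped
```

The same predicate translates directly to a `datasets.Dataset.filter(...)` or pandas boolean mask if the data is loaded through those libraries.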