Lib and let die
Aug 26, 2004

Dr Subterfuge posted:

Hahaha what an abomination of a sheet. At least the values are always stored in the first cell of a group of merged cells.


Protocol7 posted:

The fact that a spreadsheet like that is possible is both fascinating and frightening.

Par for the course, I assure you.

death cob for cutie
Dec 30, 2006

dwarves won't delve no more
too much splatting down on Zot:4
I assume the cells are merged like that to permit other parts of the document to have cells be merged in other, more horrifying amalgamations? gently caress people who don't use spreadsheets for their intended purpose, IMO

Lib and let die
Aug 26, 2004

I honestly have no idea what the logic is here, if there is any. I've tried to untangle it and figure out what the gently caress they thought they were accomplishing with this layout but the more I unravel it the less sense it makes. There's absolutely no reason to hide empty columns B-G and then start the table in column H...just use column B!

I'll admit, I've used Excel as a janky, one-off database my fair share of times, but this is just dizzying.

Zoracle Zed
Jul 10, 2001
so... sorry if you already mentioned this, but you have proposed to the powers-that-be that they may want to reconsider their Supremely hosed Data Serialization Format?

Lib and let die
Aug 26, 2004

On this specific issue? No. I've pressed them on things like cleaning up records so that quoting tools don't pull up ancient systems that the customer upgraded from and landfilled 10 years ago, and pressed them on their vague and inconsistent verbiage (in one tool it's a 'QRN Order', in another tool it's an 'SAP order', and in yet another tool, it's an 'LAC').

I've worked with this vendor on and off for almost a decade and they've improved absolutely nothing - in fact, they've actually made it worse.

Zoracle Zed
Jul 10, 2001
oh man, that's brutal. my sympathies

QuarkJets
Sep 8, 2008

Zoracle Zed posted:

so... sorry if you already mentioned this, but you have proposed to the powers-that-be that they may want to reconsider their Supremely hosed Data Serialization Format?

OP just needs to use pySFDSF

DoctorTristan
Mar 11, 2006

I would look up into your lifeless eyes and wave, like this. Can you and your associates arrange that for me, Mr. Morden?

QuarkJets posted:

OP just needs to use pySFDSF

Take screenshots of the spreadsheet and train a neural net.

Lib and let die
Aug 26, 2004

DoctorTristan posted:

Take screenshots of the spreadsheet and train a neural net.

CONCLUSION: HUMANITY IS ITS OWN GREATEST ENEMY. PROTECT HUMANS FROM HUMANS. ERROR. ERROR. ERROR.

e: don't want to doublepost, but I just stumbled upon xlwings; this may have some promise.

Lib and let die fucked around with this message at 15:47 on Sep 4, 2020

Zoracle Zed
Jul 10, 2001

Marx Was A Lib posted:

e: don't want to doublepost, but I just stumbled upon xlwings; this may have some promise.

I'm thinking now that if you're going to have any hope of a maintainable solution, you'll want to model the data source as what it actually is, an Excel spreadsheet, rather than as a text stream you're running regexes on, or trying to find a quick intermediate conversion to pandas, etc. My primary concern would be that with any python/xls interface you're inevitably going to run into some insane excel quirks/bugs/corner cases, and ideally you'd fix those at the level of the xls interface library, avoiding SFDSF-specific kludges as much as possible. If xlwings is a commercial/closed-source product, this avenue might be closed to you. Just a thought!
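
To make that concrete, fixing the merged-cell insanity at the spreadsheet level looks roughly like this. I've sketched it with openpyxl rather than xlwings, just because its merged-range API is the one I know offhand, and the filename/sheet name are made up:
Python code:
# sketch: un-merge every merged range and copy the group's value (which,
# as noted upthread, lives in the top-left cell) into every cell of the
# range, so the sheet becomes a plain rectangular grid you can parse
from openpyxl import load_workbook

wb = load_workbook("sfdsf_export.xlsx")  # made-up filename
ws = wb["Quote"]                         # made-up sheet name

for rng in list(ws.merged_cells.ranges):  # iterate a copy: unmerging mutates the set
    value = ws.cell(row=rng.min_row, column=rng.min_col).value
    ws.unmerge_cells(str(rng))
    for row in ws.iter_rows(min_row=rng.min_row, max_row=rng.max_row,
                            min_col=rng.min_col, max_col=rng.max_col):
        for cell in row:
            cell.value = value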

Zoracle Zed fucked around with this message at 16:21 on Sep 4, 2020

abelwingnut
Dec 23, 2002


Marx Was A Lib posted:

e: don't want to doublepost, but I just stumbled upon xlwings; this may have some promise.

relatedly, has anyone used this: https://developers.google.com/sheets/api/quickstart/python

and is it decent?

accipter
Sep 12, 2003
I have used xlwings for working with Excel, and it is quite nice when tools like pyexcel cannot handle the file.
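
Basic usage is about this simple (names made up):
Python code:
import xlwings as xw  # drives a real Excel install, which is why it copes
                      # with files the pure-python readers choke on

wb = xw.Book("report.xlsx")              # made-up filename
sheet = wb.sheets["Sheet1"]              # made-up sheet name
data = sheet.range("A1").expand().value  # 2D list of the contiguous block at A1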

12 rats tied together
Sep 7, 2006

abelwingnut posted:

relatedly, has anyone used this: https://developers.google.com/sheets/api/quickstart/python

and is it decent?

I worked at a place where someone did a hack week project to run a rails app off of google sheets via active record, or something. They were calling it Spreadsheets as a Database (SaaD). It seemed to work decently and I have had a good impression of google sheets since.

CarForumPoster
Jun 26, 2013

⚡POWER⚡
I used a google sheet for enriching our first ~2000 web scraped leads before importing into a CRM. Worked fine but got real slow compared to a free tier RDS.

Bad Munki
Nov 4, 2008

We're all mad here.


Running a flask app on elastic beanstalk. The service in question handles a really wide variety of tasks, from simply coalescing & converting data from another (external) service from one format to another, to converting uploaded files, to running some fairly intense bulk math and returning the results. Some of these requests may last less than a second, some may run for minutes, possibly even hours. The specific endpoint doesn't really equate too strongly to runtime, it's pretty varied. The traffic is also super spikey, although I have a pretty strong hunch there may be trends re: number of types of requests vs. time of day.

I'd really like to get a sense of total request time from before_request to teardown_request (or after_request) compared against CPU time, IO time, idle time, etc., per request. Any recommended tools to do this? Like, is there a good library I could drop into this thing that would let me start gathering those metrics during pre-flight, continue to gather them for the duration of the request, and then wrap them up and log them out during teardown, but without impacting performance in a serious way?

The short version of the why here is that because of the extremely varied nature of the traffic, it's hard to get a good feel for how best to configure this environment as far as what level of instance to run, how wide to scale them, how many processes and threads to put on each, etc. The tools built into beanstalk just seem too coarse for what I need at this point.

Bad Munki fucked around with this message at 03:35 on Sep 5, 2020

Tuxedo Gin
May 21, 2003

Classy.

Not sure if this is a python issue, a postgres issue, or a hosting issue, but maybe somebody here can help me figure it out:

I have a simple python app that monitors a website/service and posts entries to a database. It has been working flawlessly on my Raspberry Pi test server for ~5 months (to the tune of 176k entries in the DB). I've decided to deploy it to my shared hosting webserver. I migrated the database and the app, and the app runs, but when it tries to insert into the DB I get: psycopg2.OperationalError: FATAL: Peer authentication failed for user "<DB owner username>"

I've tried another user that also has permissions on the DB, but get the same result. I'm assuming there is something different about how psycopg2 logs into databases when it's on a shared hosting server as opposed to a simple server I control. The DB is fine, I can manage the DB remotely and locally (through the hosting service's panel) using a terminal or PgAdmin/PhpPgAdmin and haven't run into any permissions issues. Any idea what I might need to do differently in python/psycopg2 to connect to the DB when on a hosting server rather than a Pi?

Fluue
Jan 2, 2008

Bad Munki posted:

Running a flask app on elastic beanstalk. The service in question handles a really wide variety of tasks, from simply coalescing & converting data from another (external) service from one format to another, to converting uploaded files, to running some fairly intense bulk math and returning the results. Some of these requests may last less than a second, some may run for minutes, possibly even hours. The specific endpoint doesn't really equate too strongly to runtime, it's pretty varied. The traffic is also super spikey, although I have a pretty strong hunch there may be trends re: number of types of requests vs. time of day.

I'd really like to get a sense of total request time from before_request to teardown_request (or after_request) compared against CPU time, IO time, idle time, etc., per request. Any recommended tools to do this? Like, is there a good library I could drop into this thing that would let me start gathering those metrics during pre-flight, continue to gather them for the duration of the request, and then wrap them up and log them out during teardown, but without impacting performance in a serious way?

The short version of the why here is that because of the extremely varied nature of the traffic, it's hard to get a good feel for how best to configure this environment as far as what level of instance to run, how wide to scale them, how many processes and threads to put on each, etc. The tools built into beanstalk just seem too coarse for what I need at this point.

Sounds like you're looking for tracing solutions. This is usually done with a per-route middleware or application-wide middleware.

Not sure what packages would be best for this, though. Maybe something from OpenTracing if you want to roll your own and consume the logs yourself. If you're fully invested in AWS, the X-Ray Daemon and Python integration can get you what you're looking for as well.
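
If you do roll your own, the bare-bones version is just a pair of Flask hooks. Rough sketch; the logger name is a placeholder:
Python code:
import logging
import time

from flask import Flask, g, request

app = Flask(__name__)
log = logging.getLogger("request_timing")  # placeholder logger name

@app.before_request
def start_timers():
    g._wall_start = time.perf_counter()
    g._cpu_start = time.process_time()

@app.teardown_request
def log_timers(exc):
    start = getattr(g, "_wall_start", None)
    if start is None:  # teardown can fire without before_request having run
        return
    wall = time.perf_counter() - start
    cpu = time.process_time() - g._cpu_start
    # caveat: process_time() is process-wide, so with threaded workers
    # concurrent requests will bleed into each other's CPU numbers
    log.info("%s %s wall=%.3fs cpu=%.3fs", request.method, request.path, wall, cpu)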

shoeberto
Jun 13, 2020

which way to the MACHINES?

Tuxedo Gin posted:

Any idea what I might need to do differently in python/psycopg2 to connect to the DB when on a hosting server rather than a Pi?

How are you authenticating with psycopg2?

Peer authentication is usually what Postgres defaults to when you don't explicitly provide a username and password. It's a bit tricky to use, and personally I'd avoid it.

This SO post gives a very brief discussion on configuring auth methods for Postgres that might help:
https://www.crumbblog.com/blueberry-mint-fizz-muddle-cocktail/?utm_campaign=yummly&utm_medium=yummly&utm_source=yummly

That is the wrong link because my phone is dumb as gently caress but it looks like a tasty cocktail anyways. Here is the real link:
https://dba.stackexchange.com/a/83233
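
The usual quick fix, for what it's worth, is to force a TCP connection with an explicit host and password, since peer auth only applies to local unix-socket connections. Sketch with placeholder credentials:
Python code:
import psycopg2

# connecting over TCP (explicit host=) sidesteps peer auth, which only
# applies to local unix-socket connections; credentials are placeholders
conn = psycopg2.connect(
    host="localhost",   # or whatever host your hosting panel gives you
    port=5432,
    dbname="mydb",      # placeholder
    user="db_owner",    # placeholder
    password="s3cret",  # placeholder
)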

Bad Munki
Nov 4, 2008

We're all mad here.


shoeberto posted:

https://www.crumbblog.com/blueberry-mint-fizz-muddle-cocktail/?utm_campaign=yummly&utm_medium=yummly&utm_source=yummly

That is the wrong link because my phone is dumb as gently caress but it looks like a tasty cocktail anyways.

Yo I was gonna make this and report back, for the sake of the thread, but there’s literally no booze in the recipe, what is this, the Swift thread?

shoeberto
Jun 13, 2020

which way to the MACHINES?

Bad Munki posted:

Yo I was gonna make this and report back, for the sake of the thread, but there’s literally no booze in the recipe, what is this, the Swift thread?
:lol:

They do say you can add gin! They just claim it is optional, which is a god drat lie.

death cob for cutie
Dec 30, 2006

dwarves won't delve no more
too much splatting down on Zot:4
I have what's more or less an academic question: when doing unit testing (via unittest, pytest, nose2, w/e) you're primarily testing based on the output of the function. There's no simple way to test what a function prints to the console rather than what it returns, right? "Simple" in this case meaning "able to be explained to a novice programmer, in a way they could grasp without significant difficulty".

(This question is pretty dumb, here's the background: I teach at a code bootcamp and want to introduce unit testing to students sooner so they're not making GBS threads themselves when asked about it during a job interview, and most of our early sample functions we have them write are still occasionally printing stuff. I need like, one more reason to cut as many print statements from the curriculum as soon as possible and this could be the tipping point - if a function relies on a print statement to work properly, and we can't test if it works properly, then it's dumb to be using print() while teaching unit tests.)

shoeberto
Jun 13, 2020

which way to the MACHINES?
It's certainly not a dumb question; this is one of those weird things that comes up. I do this frequently in PHP to debug API requests since ultimately the response is just "printed" by the web server back to the client.

I haven't done this in Python but according to SO, there is a module to redirect stdout to a buffer in memory:
https://stackoverflow.com/a/57859075
I believe this is what you're looking for - redirect stdout to a buffer, run the code, then read the text out of the buffer.
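
A minimal sketch of that, with a made-up greet() standing in for a student function:
Python code:
import io
from contextlib import redirect_stdout

def greet(name):  # made-up stand-in for a student's print-based function
    print(f"Hello, {name}!")

def test_greet_prints_greeting():
    buf = io.StringIO()
    with redirect_stdout(buf):
        greet("world")
    assert buf.getvalue() == "Hello, world!\n"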

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

Epsilon Plus posted:

I have what's more or less an academic question: when doing unit testing (via unittest, pytest, nose2, w/e) you're primarily testing based on the output of the function. There's no simple way to test what a function prints to the console rather than what it returns, right? "Simple" in this case meaning "able to be explained to a novice programmer, in a way they could grasp without significant difficulty".

(This question is pretty dumb, here's the background: I teach at a code bootcamp and want to introduce unit testing to students sooner so they're not making GBS threads themselves when asked about it during a job interview, and most of our early sample functions we have them write are still occasionally printing stuff. I need like, one more reason to cut as many print statements from the curriculum as soon as possible and this could be the tipping point - if a function relies on a print statement to work properly, and we can't test if it works properly, then it's dumb to be using print() while teaching unit tests.)

Unit tests are code. When you write a unit test, it invokes a function and looks at the return value in order to be able to assert. Your functions generally need to return values of some kind in order to be unit testable in a useful manner. Print statements are void methods - they don't return values, and that's why unit tests can't see what gets printed to the screen.

That said, there's nothing wrong with having a print statement in your code - it's just that if your function doesn't return a value, WTF is the calling code supposed to do?

Wallet
Jun 19, 2006

Epsilon Plus posted:

I need like, one more reason to cut as many print statements from the curriculum as soon as possible and this could be the tipping point - if a function relies on a print statement to work properly, and we can't test if it works properly, then it's dumb to be using print() while teaching unit tests.)
I'm drawing a blank on what a function looks like that should and does rely on a print statement to work properly.

NinpoEspiritoSanto
Oct 22, 2013




Functions should give you the data you want to print, not print it for you. No reason to test internals themselves.

QuarkJets
Sep 8, 2008

What you really want is to test the contents of a string that's being passed to print. Define a function that builds that string, and then unit test that
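
Sketch, with made-up names:
Python code:
def build_greeting(name):  # pure function: trivially unit testable
    return f"Hello, {name}!"

def greet(name):  # thin wrapper that only does the printing
    print(build_greeting(name))

def test_build_greeting():
    assert build_greeting("world") == "Hello, world!"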

nullfunction
Jan 24, 2005

Nap Ghost
Or with pytest, use the capsys fixture
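
e.g. (function under test made up):
Python code:
def greet(name):  # made-up function under test
    print(f"Hello, {name}!")

def test_greet(capsys):
    greet("world")
    out, err = capsys.readouterr()
    assert out == "Hello, world!\n"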

Asleep Style
Oct 20, 2010

So I have a gitlab CI pipeline that runs a suite of unit tests using pytest. Right now this is set up so that pytest will generate a test report in junit xml format, which gitlab saves as an artifact and presents nicely on the pipeline page. I'd like to generate a report that links official requirements to the unit tests that cover those requirements, but I'm not sure what the best way to do that in python is.

In the past using C# I was able to do this with function attributes: test methods would be decorated with something like [Covers(REQ-LOGGING-001)].

The absolute easiest way would be to add covered requirements to the test method names, but that gets unreadable quickly, especially when one test might cover multiple requirements.

It would be convenient if pytest could include this as part of the output report, but that's not necessary. I'm fine with having the pytest report that lists passing tests and another report that says requirement X is covered by test Y.

Does anyone have any suggestions for libraries or tools that would be useful here?

Wallet
Jun 19, 2006

Asleep Style posted:

The absolute easiest way would be to add covered requirements to the test method names, but that gets unreadable quickly, especially when one test might cover multiple requirements.

This is a shot in the dark (as far as how you'd get it into the report) but you could always use marks to identify tests by the requirement they fulfill.
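
Something like this; note the req marker name and the conftest hook that dumps the mapping are my own invention, not anything built into pytest:
Python code:
# test_logging.py -- "req" is a custom marker, not a built-in
import pytest

@pytest.mark.req("REQ-LOGGING-001", "REQ-LOGGING-002")  # made-up requirement IDs
def test_errors_are_logged():
    ...

# conftest.py -- dump a requirement -> tests mapping at collection time
import json

def pytest_collection_modifyitems(config, items):
    coverage = {}
    for item in items:
        for marker in item.iter_markers(name="req"):
            for req in marker.args:
                coverage.setdefault(req, []).append(item.nodeid)
    with open("requirement_coverage.json", "w") as f:
        json.dump(coverage, f, indent=2)

# and register the marker (e.g. in pytest.ini) so --strict-markers is happy:
# [pytest]
# markers =
#     req(ids): requirements covered by this test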

Asleep Style
Oct 20, 2010

This may be a crime against best practices, but I was able to do this very easily by using test functions' docstrings to list covered requirements.

1. Add covered reqs to test function docstrings
2. Generate html documentation for my tests module with pdoc
3. Capture the html output of pdoc as an artifact in gitlab. Gitlab makes this available in the browser from the pipeline page and I'm done

I'm still open to suggestions for more correct ways to do this, but I'm happy I got this hack running in an afternoon
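
For reference, the docstrings literally just look like this (test name and requirement IDs invented):
Python code:
def test_login_rejects_bad_password():
    """Covers: REQ-AUTH-003, REQ-AUTH-007"""
    ...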

abelwingnut
Dec 23, 2002


i'm totally new to list comprehensions, so please forgive me if this is dumb.

i am trying to create a list comprehension and i am having a hell of a time with it. basically, i have:

list1 = [ ['a', 'b', 'c'] , ['d', 'e', 'f'] , [ 'g', 'h', 'i'] ]

so this is a list of lists. each sublist is a row of data imported from a csv.

--

list2 = [ 'j', 'k', 'l' ]

each of the elements here is a string representing a category. i want to add each category to each sublist. so i am trying to get to...

[ 'a', 'b', 'c', 'j'] , [ 'a', 'b', 'c', 'k' ] , [ 'a', 'b', 'c', 'l' ] , [ 'd', 'e', 'f', 'j' ], ...

--

list3 = [ 'm', 'n', 'o', 'p', 'q' ]

each of the elements here is a subcategory. i want to add element here to each of the new lists from directly above. so...

[ 'a', 'b', 'c', 'j', 'm' ] , [ 'a', 'b', 'c', 'j', 'n' ] , [ 'a', 'b', 'c', 'j', 'o' ] , [ 'a', 'b', 'c', 'j', 'p' ] , [ 'a', 'b', 'c', 'j', 'q' ] , [ 'a', 'b', 'c', 'k', 'm' ] , ...

which, in nested for loop land, would look like:

for x in list1:

for y in list2:

for z in list3:

<do stuff>

simple enough.

however, i'm having trouble getting from list1 to list2 via a list comprehension. basically i can't seem to append that category to the sublists. any ideas? what should it look like?

i have, effectively:

listLocation = [ x.append(y) for x in list1 for y in list2]

OnceIWasAnOstrich
Jul 22, 2006

abelwingnut posted:

i have, effectively:

listLocation = [ x.append(y) for x in list1 for y in list2]

Whether or not this is the best way to do this, you can do:

Python code:
listLocation = [ x + [y] for x, y in zip(list1, list2) ]

CarForumPoster
Jun 26, 2013

⚡POWER⚡

abelwingnut posted:

i'm totally new to list comprehensions, so please forgive me if this is dumb.

i am trying to create a list comprehension and i am having a hell of a time with it. basically, i have:

code:
list1 = [ ['a', 'b', 'c'] , ['d', 'e', 'f'] , [ 'g', 'h', 'i'] ]
so this is a list of lists. each sublist is a row of data imported from a csv.

--

code:
list2 = [ 'j', 'k', 'l' ]
each of the elements here is a string representing a category. i want to add each category to each sublist. so i am trying to get to...

code:
[ 'a', 'b', 'c', 'j'] , [ 'a', 'b', 'c', 'k' ] , [ 'a', 'b', 'c', 'l' ] , [ 'd', 'e', 'f', 'j' ], ...
--

code:
list3 = [ 'm', 'n', 'o', 'p', 'q' ]
each of the elements here is a subcategory. i want to add element here to each of the new lists from directly above. so...

code:
[ 'a', 'b', 'c', 'j', 'm' ] , [ 'a', 'b', 'c', 'j', 'n' ] , [ 'a', 'b', 'c', 'j', 'o' ] , [ 'a', 'b', 'c', 'j', 'p' ] , [ 'a', 'b', 'c', 'j', 'q' ] , [ 'a', 'b', 'c', 'k', 'm' ] , ...
which, in nested for loop land, would look like:
code:
for x in list1:

     for y in list2:

          for z in list3:

               <do stuff>
simple enough.

however, i'm having trouble getting from list1 to list2 via a list comprehension. basically i can't seem to append that category to the sublists. any ideas? what should it look like?

i have, effectively:

code:
listLocation = [ x.append(y) for x in list1 for y in list2]

Your post is hard to read without [ code ] blocks so I added them.

Is this a homework assignment? My take on list comprehensions is that they're great when they make code simpler and easier to read and maintain. When you start doing much nesting, don't do it.

Data Graham
Dec 28, 2009

📈📊🍪😋



This is a list incomprehension

abelwingnut
Dec 23, 2002


sorry. kind of forgot those exist.

in any case, this isn't homework. just trying to figure out how to combine all these loops into one. i somehow thought it'd be cleaner, but it seems like it's getting messy. there are also conditions on the first and second loops. it's ugly.

Zugzwang
Jan 2, 2005

You have a kind of sick desperation in your laugh.


Ramrod XTreme
Is this what you're looking for? You only wanted one element at a time from each of the later lists (list2, list3) added to each of the lists in list1, right? Not very pretty, but it seems to have gotten to what you're hinting at:
code:
from itertools import product

def add_element_to_lists(list_of_lists, list_of_elements):
    output_list = []
    for working_list, element in product(list_of_lists, list_of_elements):
        temp_list = working_list.copy()
        temp_list.append(element)
        output_list.append(temp_list)
    return output_list

for element_list in [list2, list3]:
    list1 = add_element_to_lists(list1, element_list)
Results in
code:
[['a', 'b', 'c', 'j', 'm'],
['a', 'b', 'c', 'j', 'n'],
...
['g', 'h', 'i', 'l', 'q']]

Zugzwang fucked around with this message at 03:08 on Sep 24, 2020

QuarkJets
Sep 8, 2008

Zugzwang posted:

Is this what you're looking for? You only wanted one element at a time from each of the later lists (list2, list3) added to each of the lists in list1, right? Not very pretty, but it seems to have gotten to what you're hinting at:
code:
from itertools import product

def add_element_to_lists(list_of_lists, list_of_elements):
    output_list = []
    for working_list, element in product(list_of_lists, list_of_elements):
        temp_list = working_list.copy()
        temp_list.append(element)
        output_list.append(temp_list)
    return output_list

for element_list in [list2, list3]:
    list1 = add_element_to_lists(list1, element_list)
Results in
code:
[['a', 'b', 'c', 'j', 'm'],
['a', 'b', 'c', 'j', 'n'],
...
['g', 'h', 'i', 'l', 'q']]

It sounds like they specifically want to use a list comprehension, I guess as a complex exercise. I had trouble interpreting what the OP wants but here's my guess:

Python code:
list1 = [ ['a', 'b', 'c'] , ['d', 'e', 'f'] , [ 'g', 'h', 'i'] ]
list2 = [ 'j', 'k', 'l' ]
list3 = [ 'm', 'n', 'o', 'p', 'q' ]

l12 = [l1 + [l2] for l1 in list1 for l2 in list2]
l123 = [l1 + [l2, l3] for l1 in list1 for l2 in list2 for l3 in list3]

print(l12)
print(l123)
The first print:
[['a', 'b', 'c', 'j'], ['a', 'b', 'c', 'k'], ['a', 'b', 'c', 'l'], ['d', 'e', 'f', 'j'], ['d', 'e', 'f', 'k'], ['d', 'e', 'f', 'l'], ['g', 'h', 'i', 'j'], ['g', 'h', 'i', 'k'], ['g', 'h', 'i', 'l']]

The second print:
[['a', 'b', 'c', 'j', 'm'], ['a', 'b', 'c', 'j', 'n'], ['a', 'b', 'c', 'j', 'o'], ['a', 'b', 'c', 'j', 'p'], ['a', 'b', 'c', 'j', 'q'], ['a', 'b', 'c', 'k', 'm'], ['a', 'b', 'c', 'k', 'n'], ['a', 'b', 'c', 'k', 'o'], ['a', 'b', 'c', 'k', 'p'], ['a', 'b', 'c', 'k', 'q'], ['a', 'b', 'c', 'l', 'm'], ['a', 'b', 'c', 'l', 'n'], ['a', 'b', 'c', 'l', 'o'], ['a', 'b', 'c', 'l', 'p'], ['a', 'b', 'c', 'l', 'q'], ['d', 'e', 'f', 'j', 'm'], ['d', 'e', 'f', 'j', 'n'], ['d', 'e', 'f', 'j', 'o'], ['d', 'e', 'f', 'j', 'p'], ['d', 'e', 'f', 'j', 'q'], ['d', 'e', 'f', 'k', 'm'], ['d', 'e', 'f', 'k', 'n'], ['d', 'e', 'f', 'k', 'o'], ['d', 'e', 'f', 'k', 'p'], ['d', 'e', 'f', 'k', 'q'], ['d', 'e', 'f', 'l', 'm'], ['d', 'e', 'f', 'l', 'n'], ['d', 'e', 'f', 'l', 'o'], ['d', 'e', 'f', 'l', 'p'], ['d', 'e', 'f', 'l', 'q'], ['g', 'h', 'i', 'j', 'm'], ['g', 'h', 'i', 'j', 'n'], ['g', 'h', 'i', 'j', 'o'], ['g', 'h', 'i', 'j', 'p'], ['g', 'h', 'i', 'j', 'q'], ['g', 'h', 'i', 'k', 'm'], ['g', 'h', 'i', 'k', 'n'], ['g', 'h', 'i', 'k', 'o'], ['g', 'h', 'i', 'k', 'p'], ['g', 'h', 'i', 'k', 'q'], ['g', 'h', 'i', 'l', 'm'], ['g', 'h', 'i', 'l', 'n'], ['g', 'h', 'i', 'l', 'o'], ['g', 'h', 'i', 'l', 'p'], ['g', 'h', 'i', 'l', 'q']]

OP, basically the rule is that the expression on the left side of the for clauses is what gets appended on each iteration of the list comprehension. `x.append(y)` returns `None`, which isn't what you want to put in the list, so that doesn't work. You need an expression that evaluates to the desired list; concatenation does that.

abelwingnut
Dec 23, 2002


thanks for all the help--it's working. my logic for the loops was correct, but i was trying to add strings to lists in an incompatible way.

thanks, again!

Jose Cuervo
Aug 25, 2004
I have a list of timestamps which represent the number of seconds since January 1st 2008. I would like to convert these into a pandas series of date times (e.g., 2020-09-27 22:13:56).

Looking through stack overflow I have found this question which uses the datetime library to convert a timestamp in seconds to a "readable" timestamp. However the datetime.datetime.fromtimestamp() function seems to be defined as seconds since January 1st 1970.

Is converting my timestamp using the datetime.datetime.fromtimestamp() as is, and then adding on the number of days between Jan 1st 1970 and Jan 1st 2008 (via pandas.DateOffset(days=num_days) where num_days is the number of days between the two dates) the best way to go about this?

Bad Munki
Nov 4, 2008

We're all mad here.


Logically, it may be clearer to find the offset of your epoch from the standard epoch and add that number of seconds to the times you have, and then convert THAT to a standard datetime object, just because then you're not calculating a known-to-be-wrong datetime at any point, but tomato tomato.
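
Quick sketch with made-up sample values:
Python code:
import pandas as pd

# seconds between the Unix epoch and the custom 2008-01-01 epoch
offset = int((pd.Timestamp("2008-01-01") - pd.Timestamp("1970-01-01")).total_seconds())

seconds_since_2008 = pd.Series([0, 86_400, 400_000_000])  # made-up sample data
timestamps = pd.to_datetime(seconds_since_2008 + offset, unit="s")
# 0 -> 2008-01-01 00:00:00, 86400 -> 2008-01-02 00:00:00, etc.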
