spiritual bypass
Feb 19, 2008

Grimey Drawer
One question is how do you set up a game server manually? Do you have an image for each type of server? Are these cloud servers? Does your preferred host have an API for setting up new servers?


AgentCow007
May 20, 2004
TITLE TEXT

rt4 posted:

One question is how do you set up a game server manually? Do you have an image for each type of server? Are these cloud servers? Does your preferred host have an API for setting up new servers?

It's just one type of server. I usually just use bash commands and edit configs, but I could probably make an image of it. I was originally thinking about using AWS, which definitely has APIs and everything can be on-demand, but it's expensive and I was wondering if there's a better way. My cheaper hosting doesn't appear to have an API, but I could shop around.

OtspIII
Sep 22, 2002

I've been working on a sort of living rulebook for an RPG in React/Express/MongoDB for a bit now, and a proof of concept of the core functionality is pretty much done. My next step requires me to figure out how user accounts are going to work, and I've realized that there's a pretty big question I don't know the answer to about how apps like this are structured.

Given that I'm modeling this after a client/server/db pattern that uses three different ports, how hard would it be to package this as a stand-alone program that could be downloaded and used without needing an internet connection? Is there some framework or bundler that could take the three different processes running and put them behind a single clean access point? An ideal version of this would probably boot up in its own window, but something that just loads as a Chrome tab could probably work as well.

The norm with RPG products like this is that they tend to just be a pdf file, but I wanted to make an equivalent that was a bit more interactive. I could just make this a website that you log in to, but I'm worried that requiring an internet connection excludes too many use-cases for this type of thing.

The Fool
Oct 16, 2003


OtspIII posted:

I've been working on a sort of living rulebook for an RPG in React/Express/MongoDB for a bit now, and a proof of concept of the core functionality is pretty much done. My next step requires me to figure out how user accounts are going to work, and I've realized that there's a pretty big question I don't know the answer to about how apps like this are structured.

Given that I'm modeling this after a client/server/db pattern that uses three different ports, how hard would it be to package this as a stand-alone program that could be downloaded and used without needing an internet connection? Is there some framework or bundler that could take the three different processes running and put them behind a single clean access point? An ideal version of this would probably boot up in its own window, but something that just loads as a Chrome tab could probably work as well.

The norm with RPG products like this is that they tend to just be a pdf file, but I wanted to make an equivalent that was a bit more interactive. I could just make this a website that you log in to, but I'm worried that requiring an internet connection excludes too many use-cases for this type of thing.

It gets some hate, but check out Electron.

You’ll need to figure out a way to package your DB component separately though.

OtspIII
Sep 22, 2002

The Fool posted:

It gets some hate, but check out Electron.

You’ll need to figure out a way to package your DB component separately though.

Is that an issue with all styles of DB or specifically MongoDB? I've been intentionally keeping the ways the app interacts with the database pretty bottlenecked to make it easy to swap out data storage systems if needed.

The Fool
Oct 16, 2003


OtspIII posted:

Is that an issue with all styles of DB or specifically MongoDB? I've been intentionally keeping the ways the app interacts with the database pretty bottlenecked to make it easy to swap out data storage systems if needed.

Yeah.

Electron would handle the React/Express components of your app, but any db you use will be handled separately.

Since your goal is to deploy this to people to use offline, you may want to consider sqlite instead. You wouldn't have to package a full featured db environment with your application.

fankey
Aug 31, 2001

snip

fankey fucked around with this message at 15:38 on Aug 26, 2020

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
It'd help to know exactly what your use case and requirements are.

My read is that you have an existing signal path that carries 24-bit 44kHz digital audio (and only that), and you're wanting to transparently add encryption by plugging in some boxes at each end. You don't care about signal latency. You do want to detect if the data is corrupted in transmission. You do want to be able to recover if part of the transmission is corrupted.

If you're willing to downsample the audio, my feeling is that you can just do that, use a proper encryption scheme such as gcm, and use the saved bits to carry the extra metadata needed (iv and authentication tag). Using your example of 64 samples and shaving two bits per sample, that gives you an extra 128 bits to work with for a 96-bit authentication tag and 32-bit IV for each block of underlying samples. On initial startup, you can just try to decrypt every series of 64 samples, and start outputting once the MAC succeeds. That gives you proper security guarantees with the same overall quality as your planned scheme.
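The "try to decrypt every series of 64 samples until the MAC succeeds" start-up step can be illustrated with a toy stand-in: here an HMAC over the plaintext instead of real AES-GCM (not secure, the block sizes and key are made up; it only shows the framing-and-resync idea).

```python
import hmac
import hashlib

KEY = b"\x00" * 16            # hypothetical pre-shared key
BLOCK = 64 * 3                # 64 samples x 24 bits = 192 bytes of audio
TAG_LEN = 12                  # 96-bit tag, as in the post

def seal(plain_block: bytes) -> bytes:
    # Toy stand-in for AES-GCM sealing: plaintext + truncated HMAC.
    # Real code would also encrypt; this only shows the framing.
    tag = hmac.new(KEY, plain_block, hashlib.sha256).digest()[:TAG_LEN]
    return plain_block + tag

def resync(stream: bytes):
    # Receiver start-up: slide over the buffer one byte at a time and
    # return the first offset where the tag verifies.
    frame = BLOCK + TAG_LEN
    for off in range(len(stream) - frame + 1):
        body = stream[off:off + BLOCK]
        tag = stream[off + BLOCK:off + frame]
        want = hmac.new(KEY, body, hashlib.sha256).digest()[:TAG_LEN]
        if hmac.compare_digest(tag, want):
            return off
    return None
```

Once an offset verifies, the receiver knows where block boundaries sit and can keep decoding from there.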

fankey
Aug 31, 2001

snip

fankey fucked around with this message at 15:38 on Aug 26, 2020

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

fankey posted:

I'm trying to come up with a scheme to encrypt a stream of 24-bit PCM audio without adding any additional data to the stream. Assuming that both sides agree on the AES key through some other mechanism, is there a way to accomplish this? A few possible gotchas

I think everybody else so far has posted cool and good things so I'm just thinking out loud. I was thinking of maybe embedding a sine wave below what can be recorded for verification, but I guess that doesn't help once data loss midway has cost you continuity. So I guess you need that extra bit of metadata to anchor your stream, like you have i-frames for video.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
You use gcm because it's a "batteries included" encryption scheme - there are a lot of ways to combine aes+hmac that aren't obviously wrong but subtly compromise the security. With gcm the only real constraint to be mindful of is "don't reuse an iv".

Though that sounds like it might be a little bit of an issue, since a 32-bit iv only has 2^32 possible values, which you've indicated isn't enough. You might need to come up with some other scheme for ivs. If your devices have reasonably synchronized clocks, you could use the date as part of the iv - that way the iv bytes in the data stream only need to be unique within a given day. If you make the iv bytes also be the timestamp within the day on the sending side, the receiving side can then check that it's not being fed a replay of an old stream.

Note that with gcm, you don't actually include the counter in the data stream, so with this setup you have 96 bits for the authentication tag, which isn't perfect but doesn't have the issues that a 64-bit tag has.
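The date-plus-timestamp iv idea above can be sketched like this, with a hypothetical layout: 8 date bytes reconstructed from the synchronized clocks, plus a 4-byte milliseconds-since-midnight counter, of which only the 4 counter bytes would travel in the stream.

```python
import datetime
import struct

def build_nonce(now: datetime.datetime) -> bytes:
    # Hypothetical 12-byte GCM nonce: 8 bytes of date (both ends derive
    # this from their synchronized clocks) + 4 bytes of position within
    # the day (the only part that needs to be carried in the stream).
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    ms_of_day = (now - midnight) // datetime.timedelta(milliseconds=1)
    return now.strftime("%Y%m%d").encode() + struct.pack(">I", ms_of_day)
```

This is just the nonce-construction half; replay checking would compare the received counter bytes against the receiver's own clock.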

Jabor fucked around with this message at 19:47 on Jul 8, 2020

OtspIII
Sep 22, 2002

Okay, I've got some follow-up misc questions I've built up over the weeks of working on this that I haven't been able to google my way out of. Here's my first:

In React, is there any way to force a URL change in response to a promise being resolved? I mostly navigate between pages via ReactRouter Links, but sometimes I need to wait for a DB call to go through before I trigger the page change. As an example: I hit a button that creates a new page, and would like to be taken right to the page to start setting it up. The issue is, I can't navigate to the page until its data has first been inserted into the DB.

I've been able to make handmade Links with useHistory(), but since that needs to be in a function-component I can't really trigger it from the promise either. I feel like this should be easy to do, but I can't get it to work and any googling I do just takes me to the basic useHistory() usage.

fankey
Aug 31, 2001

snip

fankey fucked around with this message at 15:38 on Aug 26, 2020

dupersaurus
Aug 1, 2012

Futurism was an art movement where dudes were all 'CARS ARE COOL AND THE PAST IS FOR CHUMPS. LET'S DRAW SOME CARS.'

OtspIII posted:

Okay, I've got some follow-up misc questions I've built up over the weeks of working on this that I haven't been able to google my way out of. Here's my first:

In React, is there any way to force a URL change in response to a promise being resolved? I mostly navigate between pages via ReactRouter Links, but sometimes I need to wait for a DB call to go through before I trigger the page change. As an example: I hit a button that creates a new page, and would like to be taken right to the page to start setting it up. The issue is, I can't navigate to the page until its data has first been inserted into the DB.

I've been able to make handmade Links with useHistory(), but since that needs to be in a function-component I can't really trigger it from the promise either. I feel like this should be easy to do, but I can't get it to work and any googling I do just takes me to the basic useHistory() usage.

You can set it up in many ways, but the gist is:

code:
function ComponentThatCanUseHistory() {
	// hooks must be called at the top level of the component,
	// not inside the promise callback
	const history = useHistory()
	function clickMe() {
		returnsAPromise().then(() => history.push(newPage))
	}

	return <button onClick={clickMe} />
}

OtspIII
Sep 22, 2002

I've definitely been overcomplicating it--that makes perfect sense. Thanks!

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

Who is your opponent in this scenario? My understanding is that information can be gleaned by seeing when blocks are transmitted, so if your blocks aren't especially large (and that would affect the latency which you want to avoid) your opponent can basically hear a 1 or 0 value of the line. As such, you need to encrypt the silence as well unless you don't especially care and your opponent is not spending Real Money on this.

Edit: this et al.

Volmarias fucked around with this message at 04:21 on Sep 6, 2020

fankey
Aug 31, 2001

snip

fankey fucked around with this message at 15:39 on Aug 26, 2020

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...

fankey posted:

It's 48k uncompressed audio with no silence suppression. Using ECB mode could leak silence but I think CTR or other modes defeat that.

My point is less what the content of your packets are, and more of when they are sent.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Volmarias posted:

My point is less what the content of your packets are, and more of when they are sent.

If it's 48kHz uncompressed audio, you get a uniform stream of 48 thousand samples per second regardless of the audio input.

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...
Whoops, missed the "no silence suppression" part, just caught it on the reread. Sorry!

fankey
Aug 31, 2001

I think I'll just use this instead.

quote:

combines four different techniques for audio encryption in the same scheme which makes it more secure: self-adaptive scrambling, multi chaotic maps, dynamic DNA encoding and cipher feedback encryption. Also, it introduces two new designed multi chaotic maps as pseudo random generators that combine five different chaotic maps with eight control parameters. The scheme consists of three phases with three secret keys. The first phase is a self-adaptive bit scrambling where SHA512 of the input audio is computed to be used as a first secret key for cyclic shifting the input audio binary stream which efficiently reduces the strong correlation between neighboring audio samples. The second phase is a dynamic DNA encoding of the scrambled audio...
Seems simple and easy to understand.

fritz
Jul 26, 2003

fankey posted:

I'm trying to come up with a scheme to encrypt a stream of 24-bit PCM audio without adding any additional data to the stream. Assuming that both sides agree on the AES key through some other mechanism, is there a way to accomplish this? A few possible gotchas
  • no additional bandwidth or data can be transmitted
  • no handshake is available to know when the data started. It just shows up in a buffer
  • the transport is unreliable and audio may be missing for multiple samples (most likely multiples of 48) in a row. In this case they will be 0s
  • no outside mechanism is available to synchronize the AES block
  • playing back bad audio is not acceptable. If the audio is missing, 0s should be played back

Put 5 24-bit samples in a single 128-bit buffer, fill the other 8 bits with 0, encrypt, send, then the receiver just drops those 8 bits.
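The packing itself would look something like this (a sketch of just the 5-samples-per-block layout; it doesn't address synchronization or corruption detection):

```python
def pack_block(samples):
    # Pack five 24-bit samples into one 16-byte AES block,
    # filling the last 8 bits with zeros as suggested above.
    assert len(samples) == 5 and all(0 <= s < 2**24 for s in samples)
    val = 0
    for s in samples:
        val = (val << 24) | s
    val <<= 8                          # 8 zero padding bits
    return val.to_bytes(16, "big")

def unpack_block(block):
    # Receiver side: drop the 8 padding bits, split back into samples.
    val = int.from_bytes(block, "big") >> 8
    return [(val >> (24 * i)) & 0xFFFFFF for i in range(4, -1, -1)]
```

Note the input rate question stands: 5 samples per block means the block rate isn't a clean divisor of 48k.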

fankey
Aug 31, 2001

fritz posted:

Put 5 24-bit samples in a single 128-bit buffer, fill the other 8 bits with 0, encrypt, send, then the receiver just drops those 8 bits.
It's not clear how this is supposed to solve most, if any, of the requirements I listed.

How is the AES block synchronized with the data stream? If ECB isn't being used how is the additional information being transmitted? How is bad data detected? Are the 5 samples just 5 samples from the original stream? If so are you dropping the 6th sample? If not are you downsampling to 40k?

infinite99
Aug 9, 2006

ANY OF YALLS DICKS HARD??
I'm trying to deal with some OCR output from images of tables and I'd like to be able to extract certain things out of these tables. I'm using the Azure service to do the OCR, which gives me its predicted text and bounding boxes for anything it scrapes out. So far the output is actually pretty good and I'm pretty impressed with it.

My issue now is actually getting the relevant parts of the output and throwing those into a structured database. It looks like the OCR reads things row by row and the JSON that it returns seems to reflect that so that's kind of helpful but without knowing what cells the information is sitting in, I can't reliably scrape out the information. The tables aren't always consistent either so that's another issue. I've done some image manipulation using OpenCV to figure out the cells of the table which gets me a bounding box and I can check where each portion of the OCR output falls within a box but it's not very consistent. Even trying to figure out the table headings is kind of an issue since they can exist at the top of the table or the bottom of the table. Any skewing in the image also messes up my calculations for the table as well but that might be something I can fix.

Here's some examples of what I'm working with:





The information I want to grab out is the Rev number, Date, and Description.

I feel like I'm way over-complicating this problem but I've been stuck for days trying to figure out a good solution to this so any help to point me in the right direction would be super appreciated!
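For what it's worth, the "check where each portion of the OCR output falls within a box" step can be made a bit more consistent by matching on word-box centers instead of requiring full containment. A sketch, with made-up coordinate conventions (x0, y0, x1, y1 boxes from OCR and from the OpenCV cell detection):

```python
def assign_to_cells(words, cells):
    """words: [(text, (x0, y0, x1, y1)), ...] from the OCR output;
    cells: {cell_name: (x0, y0, x1, y1)} from OpenCV line detection.
    Returns cell_name -> joined text. Using the word-box center means a
    word that slightly overflows its cell still lands in the right one."""
    out = {name: [] for name in cells}
    for text, (x0, y0, x1, y1) in words:
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        for name, (cx0, cy0, cx1, cy1) in cells.items():
            if cx0 <= cx <= cx1 and cy0 <= cy <= cy1:
                out[name].append(text)
                break
    return {name: " ".join(ws) for name, ws in out.items()}
```

Skew correction would still need to happen upstream for the box tests to be meaningful.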

CarForumPoster
Jun 26, 2013

⚡POWER⚡
Is there a database/big data thread somewhere?




CarForumPoster fucked around with this message at 13:35 on Jul 16, 2020

CarForumPoster
Jun 26, 2013

⚡POWER⚡

infinite99 posted:

I'm trying to deal with some OCR output from images of tables and I'd like to be able to extract certain things out of these tables. I'm using the Azure service to do the OCR, which gives me its predicted text and bounding boxes for anything it scrapes out. So far the output is actually pretty good and I'm pretty impressed with it.

My issue now is actually getting the relevant parts of the output and throwing those into a structured database. It looks like the OCR reads things row by row and the JSON that it returns seems to reflect that so that's kind of helpful but without knowing what cells the information is sitting in, I can't reliably scrape out the information. The tables aren't always consistent either so that's another issue. I've done some image manipulation using OpenCV to figure out the cells of the table which gets me a bounding box and I can check where each portion of the OCR output falls within a box but it's not very consistent. Even trying to figure out the table headings is kind of an issue since they can exist at the top of the table or the bottom of the table. Any skewing in the image also messes up my calculations for the table as well but that might be something I can fix.

Here's some examples of what I'm working with:





The information I want to grab out is the Rev number, Date, and Description.

I feel like I'm way over-complicating this problem but I've been stuck for days trying to figure out a good solution to this so any help to point me in the right direction would be super appreciated!

hahaha I've done this exact task before with a mix of AutoCAD files, mylars, scanned drawings, etc. in the days before useful OCR. It loving sucked. Best of luck.

Here's how I'd go about it:
Scenario 1) If I had to do ~2000 drawings and they were in a security environment where this would be acceptable, I might have OpenCV auto-crop the revision block so you're not sending IP to who knows where. Save the cropped file with a relevant file name, then I'd just get someone on Amazon Mturk/fiverr/upwork to do it manually for like $50-$100. As an engineer I'm getting paid to deliver results, not write code. Save the results in a google sheet if it's an upworker, or if it's mturk just parse the resulting CSV. 2000 drawings would get done in prob 1 day using mturk because you have a large number of people doing them in parallel, HOWEVER you would want to ALSO have people audit the results. Maybe some highly rated mturk people, maybe you, maybe an upworker. For quick and dirty projects in the past, I've just made a little HTML page with radio buttons where I glance at the work side by side to approve or disapprove it. I then have any disapproved ones automatically go back to mturk for reprocessing.

Mturk quality ain't great but a task like this is VERY easy to set up and parallelize. If you go the mturk route, a good rule of thumb is >1000 completed HITs, English-speaking country required (make a list; if you want to go cheap, include Bangladesh, Philippines, etc.), >98% approval on HITs. You should time yourself doing 10ish of the HITs and try to budget about ~$6-7/hr.

If I had >>2,000 drawings to do, or a security environment where they'd say gently caress you to the above idea, I'd try to use OpenCV to find where to crop. I might try to manipulate the image to make any detected lines straight, since on scan they might've gotten skewed, then feed the cropped and corrected image to an OCR service like Azure or Amazon Textract. Give Textract a try too; they advertise table extraction, but I've not had stellar luck with it.

If, for some reason, I can't make OpenCV reliably find the rev block to bound and crop it, I might try an image-recognition approach where I use a NN to find the rev block, then crop the image based on the outputs of the NN.

Last option is to hire a few people overseas to do it with no cropping and make them sign NDAs.

CarForumPoster fucked around with this message at 13:49 on Jul 16, 2020

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



CarForumPoster posted:

Is there a database/big data thread somewhere?

https://forums.somethingawful.com/showthread.php?threadid=2672629

infinite99
Aug 9, 2006

ANY OF YALLS DICKS HARD??

CarForumPoster posted:

hahaha I've done this exact task before with a mix of AutoCAD files, mylars, scanned drawings, etc. in the days before useful OCR. It loving sucked. Best of luck.

Here's how I'd go about it:
Scenario 1) If I had to do ~2000 drawings and they were in a security environment where this would be acceptable, I might have OpenCV auto-crop the revision block so you're not sending IP to who knows where. Save the cropped file with a relevant file name, then I'd just get someone on Amazon Mturk/fiverr/upwork to do it manually for like $50-$100. As an engineer I'm getting paid to deliver results, not write code. Save the results in a google sheet if it's an upworker, or if it's mturk just parse the resulting CSV. 2000 drawings would get done in prob 1 day using mturk because you have a large number of people doing them in parallel, HOWEVER you would want to ALSO have people audit the results. Maybe some highly rated mturk people, maybe you, maybe an upworker. For quick and dirty projects in the past, I've just made a little HTML page with radio buttons where I glance at the work side by side to approve or disapprove it. I then have any disapproved ones automatically go back to mturk for reprocessing.

Mturk quality ain't great but a task like this is VERY easy to set up and parallelize. If you go the mturk route, a good rule of thumb is >1000 completed HITs, English-speaking country required (make a list; if you want to go cheap, include Bangladesh, Philippines, etc.), >98% approval on HITs. You should time yourself doing 10ish of the HITs and try to budget about ~$6-7/hr.

If I had >>2,000 drawings to do, or a security environment where they'd say gently caress you to the above idea, I'd try to use OpenCV to find where to crop. I might try to manipulate the image to make any detected lines straight, since on scan they might've gotten skewed, then feed the cropped and corrected image to an OCR service like Azure or Amazon Textract. Give Textract a try too; they advertise table extraction, but I've not had stellar luck with it.

If, for some reason, I can't make OpenCV reliably find the rev block to bound and crop it, I might try an image-recognition approach where I use a NN to find the rev block, then crop the image based on the outputs of the NN.

Last option is to hire a few people overseas to do it with no cropping and make them sign NDAs.

Haha I'm glad you know what I'm going through at the moment. Unfortunately I need to be able to scale this up quite a bit for migration purposes. MTurk is a cool idea though and I hadn't thought about that!

It's good to know I'm on the right track at least. I have object detection set up for the rev block to crop it out which is working fairly well. I'm not required to get 100% accuracy but it still needs to be very high. So things like scans that even a human can barely read aren't in scope. Since like 90% of things are CAD generated it's pretty easy to get accurate OCR results but I'm just stuck on pulling this information out in a meaningful way haha.

I guess I just need to work on the process of figuring out the cells of the table through image manipulation to be able to build the table in memory and insert the text based on coordinates.

Thanks for the suggestions though!

fuckwolf
Oct 2, 2014

by Pragmatica
Looking for some feedback to make sure I have my bearings. I don't have much of a programming background (a bit of HTML in middle school, perhaps you saw my Metallica fan page on Angelfire, and some BASIC in high school), but I think what I'm trying to do should be within my reach with some learning and gumption. Gumption aplenty.

I'm trying to put together a website that will allow registered users to easily add, edit, sort, and export data sets. Right now, all of the record keeping takes place in a bunch of different Excel spreadsheets, so the basic functionality would replace those with something more cohesive that is easier for users to access. Here's a shortlist of other features I'd like to implement: create an e-mail notification when data falls outside of a certain range, display charts based on the selected data, and generate custom reports such as "Here's the trend for the last 30 days of brand X."

Right now I'm working on refreshing my understanding of HTML and CSS. That's been easy, I just don't know where to go from here. I think Javascript and JQuery to handle the interactive elements? PHP and SQL to handle adding to and retrieving from the database? What about Ruby? What do I need to know to generate e-mail notifications? What about username and login stuff? Just looking for some direction and a sanity check here. Does this all sound possible for someone who doesn't know what they're doing? Any general tips?
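One encouraging note on the e-mail notification feature: it splits into a plain range check (a few lines in any language) and the actual sending, which is where the framework or a mail library comes in. A sketch of the range-check half, with hypothetical data shapes standing in for whatever comes out of the spreadsheets:

```python
def out_of_range_alerts(rows, low, high):
    # rows: [(brand, value), ...] pulled from the database.
    # Returns the notification lines you'd hand off to your mailer
    # (smtplib, a transactional mail API, etc.).
    return [f"{brand}: value {value} outside [{low}, {high}]"
            for brand, value in rows if not (low <= value <= high)]
```

The sending side is a solved problem in every framework mentioned below, so the custom part really is just this check.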

Dominoes
Sep 20, 2007

Don't learn jQuery. You need to learn to program for web servers, which is most easily done with a backend framework. PHP and Node are popular examples, but they're both a mess. I recommend starting with a batteries-included framework like Python/Django or Ruby/Rails. Another approach is using a minimal framework, like FastAPI or Flask for Python, or one of the ones for Go or Rust. You'll have to write more manual code for solved problems this way, or use addons - e.g., for the email notifications and login you mention.

You can complete your objective without using SQL, since the frameworks include high-level tools (called ORMs) that execute the SQL for you. That said, I recommend learning it, as it's universal, and will protect you from getting stuck in a particular framework, and its provincial DB dialect.
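To make the ORM point concrete, here's roughly the SQL a framework runs for you under the hood, written by hand against an in-memory sqlite db (the `readings` table and its columns are made up for illustration; in Django the query would be something like `Reading.objects.filter(value__gt=5)`):

```python
import sqlite3

# Throwaway in-memory database with a hypothetical schema,
# standing in for the spreadsheets being replaced.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (brand TEXT, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [("X", 1.5), ("Y", 9.0)])

# The kind of query an ORM would generate for you:
rows = conn.execute(
    "SELECT brand, value FROM readings WHERE value > ?", (5,)).fetchall()
```

Knowing this layer exists is what keeps you from being locked into any one framework's ORM.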


TLDR: Follow this tutorial to completion.

Dominoes fucked around with this message at 23:49 on Jul 16, 2020

CarForumPoster
Jun 26, 2013

⚡POWER⚡

infinite99 posted:

I guess I just need to work on the process of figuring out the cells of the table through image manipulation to be able to build the table in memory and insert the text based on coordinates.

Thanks for the suggestions though!

If you need real scale on largely templated drawings then may I suggest: https://scale.com/document

This is a real problem that's worth a lot of money, so I wouldn't count on being able to roll your own to >>90% accuracy. That said, I have never had great luck with OpenCV unless I have a fairly consistent image dataset.

infinite99
Aug 9, 2006

ANY OF YALLS DICKS HARD??

CarForumPoster posted:

If you need real scale on largely templated drawings then may I suggest: https://scale.com/document

This is a real problem that's worth a lot of money, so I wouldn't count on being able to roll your own to >>90% accuracy. That said, I have never had great luck with OpenCV unless I have a fairly consistent image dataset.

Thanks for the link! From the sounds of it, they take the same approach that I'm taking, and I'm sure it'd be a lot better. It might be something I'll have to look into if things don't go well.

My object detection method has been working quite solidly, thankfully. I have a ton of documents to take examples from and train my model on, and it actually works, which is surprising to me. Deep learning and neural networks all seem like magic to me but I'm slowly grasping how things work. I played around some more with OpenCV and worked out some bugs in my table detection/creation method and it seems to work pretty well so far. I think the majority of documents that I'll be working with are going to be generated PDFs or at least documents that should be fairly readable. I think eventually I'll have to move to Python for all of this since all the machine learning stuff is done in that language, or at least it seems easier to work with there as opposed to C#.

Hadlock
Nov 9, 2004

Dominoes posted:

Don't learn JQuery. You need to learn to program for web servers, which is most easily done with a backend framework. PHP and node are popular examples, but they're both a mess. I recommend starting with a batteries-included framework like Python/Django, or Ruby/Rails. Another approach is using a minimal framework, like FastApi or Flask for Python, or one of the ones for Go or Rust. You'll have to write more manual code for solved problems this way, or use addons. Eg for the email notifications, login you mention.

You can complete your objective without using SQL, since the frameworks include high-level tools (called ORMs) that execute the SQL for you. That said, I recommend learning it, as it's universal, and will protect you from getting stuck in a particular framework, and its provincial DB dialect.


TLDR: Follow this tutorial to completion.

This is a good one: https://tutorial.djangogirls.org/en/

Walks you through how to build a blog

Oh, bummer, rails for zombies is down. Personally I like the djangogirls blog example, it's really straightforward, but rails for zombies is a sort of gamified way of building a twitter clone, but for zombies.

You could write something in Flask, but I think Django is a better option for something like this. You should be able to build the functional djangogirls blog app and then modify it to meet your needs. Personally I think there's just slightly too much magic in ruby on rails with hidden symlinks etc that cause weird unintended side effects. Django/python is a nice mix of batteries included, and "oh yeah, this thing goes here, ok, cool".

fuckwolf
Oct 2, 2014

by Pragmatica
Thank you both. After reading a bit more about Python and Django, it sounds like the right direction. I’ve started working through the Google Python course as well as dipping my toes in the DjangoGirls material. So cool that there are so many free resources for learning this stuff. Exciting!

CarForumPoster
Jun 26, 2013

⚡POWER⚡

fuckwolf posted:

Thank you both. After reading a bit more about Python and Django, it sounds like the right direction. I’ve started working through the Google Python course as well as dipping my toes in the DjangoGirls material. So cool that there are so many free resources for learning this stuff. Exciting!

You sound much like me 2 years ago. I agree that Python is the right language. I use both Django and Dash, my suggestion below, regularly and have active web apps currently deployed in both. I definitely have less experience than Dominoes though.

When you get to the part where you want to actually deploy your Django app to be a real live web app for people to use on the internet, I highly recommend not learning about all the server config that tends to come with deploying Python web apps to servers, and instead taking a look at Zappa, which makes using AWS' serverless architecture quite easy. EXCEPT if you decide to use Flask/Dash, in which case Heroku makes it really really really easy to get a Flask/Dash web app up and working.

fuckwolf posted:

a website ... registered users ... easily add, edit, sort, and export data sets. ... replace Excel sheets ... features: create an e-mail notification when data falls outside of a certain range, display charts based on the selected data, and generate custom reports such as "Here's the trend for the last 30 days of brand X."

Except for the authentication of users, what you're describing is basically the perfect use case for the Python-based Dash.

You can make a working, deployable-to-Heroku app, with as little as the following block of code (3 files):
code:
# file 1 - Procfile: This is your server spec.
web: gunicorn app:server

# file 2 - requirements.txt: This is your list of stuff to have heroku pip install
dash-bootstrap-components
pandas #probably

# file 3 - app.py: This is your actual code. Below is all you need to have pretty bootstrap stuff available on a bare bones site.
import dash
import dash_bootstrap_components as dbc

app = dash.Dash(
    external_stylesheets=[dbc.themes.BOOTSTRAP]
)
server = app.server  # the Procfile's "app:server" points at this

app.layout = dbc.Alert(
    "Hello, Bootstrap!", className="m-5"
)

if __name__ == "__main__":
    app.run_server()
The most compelling reason to use Dash instead of Django is that the server config is generally easier (IMO) and the CSS/HTML bits almost completely disappear. The whole thing is done in Python and made pretty by dash-bootstrap-components: you learn one language, Python, and are done with it.

For comparison's sake, a similar thing in Django would usually require defining and provisioning a database, creating HTML templates, and 6+ files before it deploys successfully. That said, I love Django for 3 reasons: user authentication is good and baked in from the start; almost any problem you have with it has 4+ Stack Overflow articles about it; and django-rest-framework makes it really easy to slap a REST API on top of an existing app. Django's learning curve is steeper than Dash's if you're just starting out, but there are real rewards to it if you have those use cases.

CarForumPoster fucked around with this message at 14:20 on Jul 18, 2020

CarForumPoster
Jun 26, 2013

⚡POWER⚡

fuckwolf posted:

refreshing my understanding of HTML and CSS. [...] Javascript and JQuery [...] Ruby [...] PHP and SQL

See my reasons above, but IMO you should not start by learning any of the things you suggested. You don't need to know SQL, CSS, or JS; you just need to google them every now and then. Example below.

quote:

1- handle adding to and retrieving from the database?
2- What do I need to know to generate e-mail notifications?
3- What about username and login stuff?
4- Does this all sound possible for someone who doesn't know what they're doing? Any general tips?

All of this is covered by a one page Dash app except the database. I'd suggest using Amazon RDS.

1- For reading/writing an Excel-like table from a sqlite db into Python, whether Django or Dash, you can just do this:
code:
import sqlite3
import pandas as pd

conn = sqlite3.connect(database_path)
df = pd.read_sql("SELECT * FROM %s" % table_name, conn)
Put those two lines in your dash app, do whatever calculations, return the result. That easy.
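The writing half of that round trip is just as short — pandas will create the table for you. A sketch with a made-up table name, using an in-memory db so it runs anywhere (swap in your real database path):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")  # placeholder for your real database_path

# push a DataFrame into sqlite; "readings" is a made-up table name
df = pd.DataFrame({"sensor": ["a", "b"], "value": [1.2, 3.4]})
df.to_sql("readings", conn, if_exists="replace", index=False)

# and read it straight back, same as the snippet above
out = pd.read_sql("SELECT * FROM readings", conn)
```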

2- Everyone uses APIs for service-based stuff now; it doesn't matter the language. The strict answer is that you should use an email API like Mailgun. That said, it's hilariously easy to send IMs via webhooks to MS Teams or Slack, which my team greatly prefers. For emails in Python via an API like Mailgun it'd be this. Yes, really that easy. One call.
code:
import requests

requests.post(
    "https://api.mailgun.net/v3/samples.mailgun.org/messages",
    auth=("api", "key-3ax6xnjp"),
    data={"from": "Excited User <excited@samples.mailgun.org>",
          "to": ["devs@mailgun.net"],
          "subject": "Hello",
          "text": "Testing some Mailgun awesomeness!"})
3- How many users? Django has the best baked-in auth; Dash has simple baked-in auth.
4- Yes, it's much easier than you're thinking. You just need to get started, and Python is probably the way to do that.

CarForumPoster fucked around with this message at 14:41 on Jul 18, 2020

fuckwolf
Oct 2, 2014

by Pragmatica
It will be in the neighborhood of 50 users. I’m planning on using email rather than an IM based solution because many of the users don’t work at desks. Almost all have email access via their phones, though. I don’t think very many would be interested in installing some additional app on their phones since the company doesn’t pay for their devices.

As for the rest, I know so little right now that I can’t make heads or tails of it, but hopefully that will change quickly as I dive in. I really appreciate the thoughtful responses and encouragement. I’ll definitely post an update as it gets more fleshed out (hopefully) or some more questions when I’m lost (more likely).

Walh Hara
May 11, 2012
An unpopular opinion: while Python Dash is pretty cool, R Shiny is superior in every way except that it's written in R. But if you have to learn a language anyway and the app is very basic then in my opinion it's both easier to use and has a much nicer end result.

https://shiny.rstudio.com/

Xeom
Mar 16, 2007
Is it really worth hashing integers for a hash table if the sizes of all your tables are prime?
You can't distribute the set of numbers any more evenly; they already distribute perfectly under n % p.

I guess you could get some bad behavior if all your keys share a divisor, but it's not clear to me that hashing the keys won't just move the collisions to another part of that set of numbers.

Is there any good information on why you should hash integral types?


Look Around You
Jan 19, 2009

Xeom posted:

Is it really worth hashing integers for a hash table if the sizes of all your tables are prime?
You can't distribute the set of numbers any more evenly; they already distribute perfectly under n % p.

I guess you could get some bad behavior if all your keys share a divisor, but it's not clear to me that hashing the keys won't just move the collisions to another part of that set of numbers.

Is there any good information on why you should hash integral types?

It's been a bit since I've had theory, but I believe hashing should be more uniform in the case of multiple keys with similar divisors.
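A quick way to see it — a sketch using a multiplicative (Fibonacci-style) hash, with the constant picked purely for illustration. With a prime table size p, keys that all share the divisor p collapse into a single bucket under bare n % p; mixing the bits first spreads them back out:

```python
P = 101                               # prime table size
keys = [P * i for i in range(1, 50)]  # every key shares the divisor P

def mix(x):
    # simple multiplicative hash over 32 bits (illustrative constant)
    return (x * 2654435761) & 0xFFFFFFFF

raw_buckets = {k % P for k in keys}          # bare modulo
mixed_buckets = {mix(k) % P for k in keys}   # hash first, then modulo

print(len(raw_buckets))    # 1 -- every key collides in bucket 0
print(len(mixed_buckets))  # many distinct buckets
```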
