If you want a batteries-included, zero-config Django + React framework, check out https://www.reactivated.io .
It incorporates most of these best practices, along with React server side rendering in addition to regular Django templates.
Full disclosure: I'm the creator.
If you're doing server-side rendering, why do it in React SSR instead of Django templates?
I've seen this pattern a lot recently but haven't figured out what the extra complexity gets you
You can use the same templates on the backend and frontend. That may not matter to you, at least not now, but it's a great reason in many contexts.
Personally I find JSX to be a terrific templating system, much better than anything like Django's, but if you're familiar with both and prefer Django's, this benefit does not apply to you!
I haven't used this particular framework, but one benefit is type safety in React through writing TypeScript, with generated bindings to the backend so those are typed as well.
There's also a _ton_ of preexisting React components you can generally drop in and use, which is less true these days with something like a Django template.
You also have the option of doing SSR and then doing more dynamic stuff on the client side, as a sort of optimization (and plain better user experience than making _more_ server requests to get the initial page state loaded).
Thanks for taking the time to reply, but I'm still not really understanding the benefit.
Type safety is incredibly important if you're writing lots of logic in a language, which is why TS is great for SPAs. Does the type safety of TS get you anything if you're just doing SSR without much logic in TS? All of your application logic will be on the Python side of things.
>You also have the option of doing SSR and then doing more dynamic stuff on the client side
Isn't this just the old-school way of doing things before SPAs came around? I.e. you render the page on the server and then add dynamic features using JS. I think the new way of doing this is with htmx, Hotwire, etc.
I think the biggest benefit of SSR is you get the first page fully rendered out (good for SEO), and beyond that page the react SPA takes over by doing all the cool client side stuff like routing and what-not.
The biggest benefit of SSR, in my opinion, is SEO plus a fast first load, because the page is already generated. After that, though, it's just the plain old SPA experience.
this is cool, thanks for sharing. this (https://www.reactivated.io/documentation/philosophy-goals/) fits my bias so i'm def gonna try it. curious how it works at a low level, too
Same, I agree with many of the thoughts there. When you get a moment, would you mind writing a guide on how to deploy it to Render.com? It sounds like the Heroku of 2022.
This looks great!
If you do this a lot you should look into 'startapp --template', which lets you bake in boilerplate like this.
One minor point on app names. They are really hard/annoying to change once you have production data, because your DB tables will always be appname_modelname, and renaming tables is a real PITA. You can set the model to point to a nonstandard table, but then your project has a confusing non-Djangonic wart that will annoy you forever more. For this reason I think ‘core’ is a safer option, unless you are certain what your app should be named. Naming after one domain model is likely to be too specific. (I strongly recommend against multiple apps until you have functionality you need to commonize between projects, as refactoring models between apps is a nightmare if you get the boundaries wrong.)
> For this reason I think ‘core’ is a safer option, unless you are certain what your app should be named.
Yep, this is what I do. It’s rare that I make anything that can be shared cross-app (most Django apps I make are super simple CRUD-types), so every one of my Django apps has a ‘core’. I also recommend this approach.
Regarding apps, I was just thinking you could create a directory structure like this, essentially eschewing the whole concept of Django "apps":
You still need to register the top level package as an app for Django to find things, but then you don't have to deal with Django's concept of apps beyond that. All your tables will be named <package>_<model> by default, which seems nice.
<project>/
    src/
        <package>/
            models/
                __init__.py    # Import all model classes (necessary for Django to find them)
                some_model.py  # Define SomeModel class
            views/
                __init__.py
                some_view.py   # Views for SomeModel
            settings.py        # Add "<package>" to INSTALLED_APPS
            urls.py
If it turns out later you need reusable "apps" for some reason, you could always add an apps/ subpackage for them.
I haven't tried this in a real project, so I don't know if there are any downsides, but it seems like a decent approach.
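For reference, the models/__init__.py in a layout like that would just re-export the model classes so Django's app registry still finds them when it imports `<package>.models` (the module and class names below are hypothetical):

```python
# models/__init__.py — re-export every model class so Django's app
# registry picks them up; Django only imports the package itself.
from .some_model import SomeModel
from .another_model import AnotherModel

__all__ = ["SomeModel", "AnotherModel"]
```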
Yeah I've played around with this approach in the past while prototyping service boilerplates, I think it's viable. I never got it polished enough to publish as a startproject --template and ultimately went with Flask for microservices, so I didn't finish the prototype.
I found this article useful while I was experimenting: https://simpleisbetterthancomplex.com/article/2017/08/07/a-m...
There are a few different places where little things break and then you need to use an obscure settings.py variable to fix them, which makes me a little nervous since it will cause a little cognitive friction for Django-fluent developers joining your project. But I do think it's worth experimenting with.
I would add to this list: convert your template engine to Jinja2. I held out for a long time on this, and now I would not go back. The purist approach ("keep logic out of templates") sounds good, but in practice leads to spaghetti hacks with weird additional template tags to do simple things, a bunch of additional context variables with no meaning, etc.
> purist approach ("keep logic out of templates") sounds good, but in practice leads to spaghetti hacks
This. Exactly this
Go for Jinja2 and don't look back
If there's one defect in Django, it's the rose-colored purist view that permeates a lot of its design decisions.
Keeping "logic only in code" made sense in the 90s. And no, I'm not building a new template tag every time I make some stylistic choice.
As another hold-out on Jinja, what pushed you over the edge? Half of the appeal to me in Django is that if you follow the easy path, one project should look like another. Breaking conventions for a few template niceties has not seemed worth it (although, I am not a front end guy, and survive on very barebones presentation).
> what pushed you over the edge?
Using Tailwind without a frontend js framework. I needed a way to create lightweight “components” ala React but on the backend, because otherwise you’re either copy pasting classes like mad or (ab)using tailwind’s @apply.
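A minimal sketch of that pattern, assuming Jinja2 is installed. The `button` macro and the Tailwind classes are made up for illustration:

```python
from jinja2 import Environment

# Hedged sketch: a Jinja2 macro as a lightweight backend "component",
# so Tailwind utility classes live in one place instead of being
# copy-pasted into every template.
env = Environment()
page = env.from_string("""
{% macro button(label, kind="primary") %}
<button class="px-4 py-2 rounded {{ "bg-blue-600 text-white" if kind == "primary" else "bg-gray-200" }}">{{ label }}</button>
{% endmacro %}
{{ button("Save") }}
{{ button("Cancel", kind="secondary") }}
""")
html = page.render()
print(html)
```

In a real Django project you'd configure the Jinja2 template backend in TEMPLATES and keep macros in a shared template that pages `{% import %}`.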
I do pretty much the same, which makes me think it should be the default Django startproject template.
I would also do:
- install django_extensions and django_debug_toolbar
- rename the admin URL route to contain a uuid
- setup black, mypy, pylint, ipdb, ipython, dotenv and doit (I do so for all my python projects)
- manage.py createsuperuser
Also, very often :
- setup celery (locally with fs backend)
- setup vitejs
- install sentry and mailtrap plugins
- install django-allauth
- install DRF
And currently exploring:
- replace DRF with django ninja
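For the "rename the admin URL route" item above, a stdlib-only sketch of generating the hard-to-guess prefix once, which you'd then hard-code into urls.py:

```python
import uuid

# Hedged sketch: generate a random admin prefix once, then paste the
# result into urls.py, e.g.:
#   path("admin-<generated-uuid>/", admin.site.urls)
admin_prefix = f"admin-{uuid.uuid4()}/"
print(admin_prefix)
```

This doesn't replace real auth, it just keeps bots that blindly probe /admin/ away from the login form.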
Straightforward, useful advice. I pretty much do all of these when I start new Django projects.
The need for a custom User model makes me a little sad every time because, well, a single `name` field (rather than `first_name`, `last_name`) and email-for-username do feel to me like more sensible defaults circa 2022.
I'm curious as to why you think a single name field is better than the traditional separate first and last?
Per https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-... , names have far too much ambiguity to consider that two fields is necessary or sufficient to encapsulate that information.
Maybe for consumer apps, but for a lot of enterprise apps that have to be compatible with existing APIs or domain-specific data standard that require two name fields, that’s not really a viable approach.
7. completely rewrite the auth system because it's not sufficiently flexible, chafe at the boilerplate, then port the backend to a different framework + miss the admin system a little
Django-allauth usually has you covered, but in the rare case it doesn't, creating a microservice just for auth is a much better alternative than a complete rewrite.
I’ve never really understood the concept of an ‘app’. In some projects I’m sure it makes sense, but its not being transparent by default in a new project seems silly.
Am I missing something here?
The name sucks, but the idea is that they are pluggable: check the Django Packages site for an idea of the benefit of that. It's essentially a plugin system with autoloading of some resources, such as models and templates.
Also, apps override each other in the order of import. E.g. one app can override the template of another just by using the same name. This means you can plug AND extend.
I used to build everything with one giant “monoapp” and completely ignore that aspect of Django, which worked pretty well. Eventually though you need to split up your models into multiple files, and then your views, and admin, and so on, and it turns out there’s a lot of benefits of just segregating things into “apps” from the beginning. The only real downside I’ve noticed in practice is the omni-file-search feature in my IDE is harder to use, because you now have 7 “models.py” and it’s a bit harder to quickly navigate to the right one. If you split up files without using apps (just normal Python imports), you don't have the same problem with all the names being duplicated.
The pitch that you can "reuse" apps across projects always seemed weird to me, because that's basically never something you actually want to do/can do easily without major surgery.
I will sign onto this approach as well. Never expanded beyond a `core` app, which may have duplicated modules as the project grows (models_foo, models_bar, views_foo, views_bar, etc). I have never felt that any one component of the project was isolated enough from the rest of everything that it made sense to firewall off pieces into apps.
Say you want to easily create a comments app or ratings that you can associate with any other model in your project. Write that app once and use in many places. Or maybe something that sucks to write that may be error-prone, like auth, especially social auth which may change frequently. Maintaining one app and sharing it is way better than everyone trying to keep up with the changes themselves.
Or take a look at https://djangopackages.org/ for examples of apps people like to share.
Can't you just keep the logic for your comments app in the monoapp and use it with everything?
I understand if you want to distribute your app for others to incorporate, but otherwise it felt like the documentation overemphasizes the "app" aspect. It's a reasonable structure but more of a guideline... unless you want to reuse in other codebases
I think many (most?) Django apps only ever need one app. I think the fact that it's front and center in Django is a bit if a distraction that makes people think they _have_ to use multiple apps.
100% agree. It’s a distraction
I don't know why no one mentioned this.
I use a modified version of this for all my projects. This one comes with all the goodies
Yup, a custom cookiecutter template saves ton of time.
I created this one to create a production ready Django project in few minutes: https://github.com/Mittal-Analytics/django-cookiecutter
Things this does:
- README.md: setup readme with development setup
- Django split settings: split settings for local, testing and production
- Split requirements: split requirements.txt for local and production
- Pre-commit hooks: setup pre-commit hooks for black and pyflakes
- django-environ: database config and secrets in environment
- editorconfig: sensible tab/space defaults for html, js and python files
- remote-setup: setup hosting on uberspace
- git push deployment: `git push live` makes the changes live
- github actions for tests: run tests automatically on Github
Really sensible, straightforward recommendations. Nothing flashy.
Honestly I think these should be defaults (or at least configurable options) of the project generator. I guess you could use cookiecutter, but it's not really worth it.
And also install Django Extensions (https://django-extensions.readthedocs.io/en/latest/)
It makes the Django CLI comparable to Rails. Notably the reset_db and shell_plus commands.
Reasonable. Personally I just made a template project to get everything off the ground in one step, even the production environment and repository.
Every time I start a new project:
1. django-environ
2. celery
3. channels
4. apps structure https://rajasimon.io/blog/django-project-structure/
Setting up all the "Hello World" boilerplate takes a long time.
I use Django, and personally I don't like that the startproject command doesn't allow creating a project whose name contains a dash ("-"). Also, the default app name being the same as the project name can confuse newcomers to Django.
I always structure my project like this when starting a Django project:
$ mkdir -p my-example-project/src
$ cd my-example-project/src
$ django-admin startproject youtube_downloader .
so everything Django-related will be under the src folder.
That's actually a Python convention issue rather than a Django issue. Python convention dictates preference for underscores. This is because dashes cannot be used in a module name.
Yes, because by default the Django app name is the same as the Django project name.
How often does one start a new Django project? I've been working on the same project for a couple years...
5 times a year if you are a freelancer.
I started dozens of rails projects when doing client work. We occasionally chose Django, too.
all great bits of advice except the first.
keep your secrets out of your environment! especially when you aren't providing any additional safety advice like making sure DEBUG is off
Environment variables exist only in memory and can't be read by any user other than the process's own and root. There's nothing wrong with keeping secrets there.
Depends on the size of your setup. Most websites are small, and their threat model doesn't require more than having the secrets in a systemd env statement.
After all, if you have a single server and the attacker can read a root-protected file or the spawned process's context, you are pwned already. As for exposing env vars, popping os.environ is usually enough.
No need to pay for more than you must: bots and script kiddies are not Mossad.
They both recommend mounting them into a tmpfs, and then unmounting, but I've not yet tried this.
I've heard this advice a number of times, but often run up against otherwise standard looking systems that rely on secrets in environment variables -- mainly thinking about AWS's requirement when using Secrets Manager with ECS; the secrets are stored securely, but ultimately loaded into a containers environment.
Where should you put secrets instead? Obviously not the filesystem.
Env vars in systemd or supervisor files are fine for small server projects. But make sure you pop them from the os.environ dict once you read them to avoid accidental exposure.
For the desktop, use the keyring module.
If you start to scale up your threat model, you should use a vault, but the setup is way more costly, and it's tricky to get the reboot story right (hence the privileged-first-request comments).
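A stdlib-only sketch of the pop-from-environ approach mentioned above (the variable name and value are examples; in production the value would come from your systemd or supervisor env statement, not be set in code):

```python
import os

# Stand-in for systemd's Environment= line, just to make this runnable:
os.environ["SECRET_KEY"] = "example-not-a-real-secret"

# Read the secret once at startup, then remove it from os.environ so a
# later DEBUG error page or environ dump can't leak it.
SECRET_KEY = os.environ.pop("SECRET_KEY")

print("SECRET_KEY" in os.environ)  # prints False
```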
Theoretically you could write a process that gets its secret from the first request, which is treated as privileged. Then the app can have it in memory, and it's only as vulnerable as the app itself. (This is a kind of degenerate form of the "stem cell" pattern for processes, like Erlang's gen_server.) And yes, this kicks the can to another system, but presumably the secret has to be durable somewhere. Although perhaps in a sufficiently sophisticated (or broken?) cluster the nodes could keep passing the secret to each other, never touching disk.
I’ve heard that tmpfs is a good solution. A script on boot loads the secrets from something like vault. Env can be exposed through docker inspect if you’re on a shared host. Not sure what other negatives to env there are though.
> Obviously not the filesystem.
How would you avoid that?
Even this article, which recommends keeping your secrets in environment variables, tells you to implement that by storing them in the filesystem. The advice isn't to avoid storing your secrets in the filesystem; it's to avoid storing them in version control.
It is apparently left as an exercise for the reader to figure out why it's better to read your secrets out of the environment, which reads them from a file you provided, than to read them from your own file yourself.
Linux already stores plenty of secrets in files in /etc. Just do the same: the root-protected init file for your app likely has a mechanism for passing an env var to the new process. Systemd and supervisor do. Then pop it from os.environ instead of just reading it, and you are safe from bots and script kiddies, which is likely your threat model.
If you use bitwarden, you can set them via the bitwarden CLI, e.g.:

    export DATABASE_URL="$(bw get password my_app/db_url/prod)"

And put that in a startup script (that can go in version control).
This has the same problem that OP refers to with regards to DEBUG: It remains in the environ, and for example if you forgot to set DEBUG=False, Django will just dump the whole environ on the error page.
If you can call bitwarden from within the python code somehow, such as with subprocess, that would be better.
And what happens on reboot? Then you get to hardware solutions, and a small website is not going to want that.
The first two can be covered by this helpful library. 
I found this a lot more interesting than the submission article: https://12factor.net/