I use my hand-drawn sketches as the basis for digital art. I started with ControlNet Lineart and was impressed by how faithfully each drawn element is carried into the render. I also like to run image-to-3D experiments with sketches, 2D renders, and various personal images.
In my painting and drawing practice I had wanted abstraction to be a way of engaging the viewer, giving them an opportunity to add meaning to the work through their own enjoyment and contemplation. Overwhelmingly, people preferred to hear about my intent and then ask me questions about my practice.
I found that disappointing because I wanted them to at least ask questions about the piece. Maybe I wasn't good at giving them a way to connect to the art, so they would default to asking how often I paint, when I started painting, how I find the time to paint, and so on. Back in the studio, these interactions convinced me that I was on my own pursuit to stay creatively fit and active, but not the kind of artist people would be curious about.
These days I have noticed that when I show people my AI creations, or the sketches behind them, they ask about the ethics of AI, whether art is in danger, and what I intend to do with the work.
The other thing I have observed is that adults typically flip through the creations quickly. Kids look at a few, ask a few questions about what they are, and then tell me about their own art and the things they make with friends. Gotta love kids! Here are a few recent ideas.
Love the subtle retro style of this sculptural piece! It took a bit of fine-tuning of the style, but it is one of the more satisfying things I have created recently.
I am a shy artist, primarily because I don't get motivation from sharing art publicly. I see so much new art online every day that once I begin thinking about where I fit in, the mental fatigue becomes counterproductive for me.
Recently I shared an album of hundreds of creations with a friend (and my singular art fan), and he asked some questions that I felt were interesting enough to turn into this post about my process and what it teaches me versus what I am seeking.
Specifically, I have learned to take ink drawings and create renderings that reveal my actual intention. My digital art goal is to rework natural details into imagined characters and landscapes that engage my affection for abstraction, deconstruction, and humor.
My drawing goal is to be humorous and crafty about rendering things just slightly incorrectly, so that the viewer sees something familiar and recognizable even when it's nonsense.
My process uses hysts/ControlNet-v1-1 with Lineart, 50 steps, and a guidance scale of 14, and I give minimal descriptions that are often plain. Example: "Really real old dog, plant, and another old dog, with an alligator turtle, posing for a photography portrait".
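For anyone who wants to script this step instead of using the Space's UI, a rough local equivalent in diffusers looks something like the sketch below. It assumes the usual public lineart checkpoints (lllyasviel's SD 1.5 lineart ControlNet and annotator), not necessarily exactly what the Space loads.

```python
# Rough local approximation of my lineart step, assuming the common public
# checkpoints (lllyasviel/control_v11p_sd15_lineart on top of SD 1.5).
import torch
from PIL import Image
from controlnet_aux import LineartDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler

sketch = Image.open("ink_drawing.png").convert("RGB")

# Extract lineart from the scanned ink drawing
lineart = LineartDetector.from_pretrained("lllyasviel/Annotators")(sketch)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

prompt = ("Really real old dog, plant, and another old dog, "
          "with an alligator turtle, posing for a photography portrait")

image = pipe(
    prompt,
    image=lineart,
    num_inference_steps=50,   # my usual step count
    guidance_scale=14,        # my usual guidance scale
).images[0]
image.save("render.png")
```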
In the past few months I started taking the ControlNet render to multimodalart/flux-style-shaping and mashing up styles. Here I used a portrait of a tortoise and a dog lying next to each other on a reflective tile floor.
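That Space can also be driven from a script with gradio_client. I haven't pinned down its exact endpoint signature, so treat the endpoint and argument names below as placeholders and check client.view_api() first.

```python
# Hedged sketch of calling the style-mashing Space programmatically.
# The api_name and argument order below are placeholders; run view_api()
# to see the real endpoints before relying on this.
from gradio_client import Client, handle_file

client = Client("multimodalart/flux-style-shaping")
client.view_api()  # prints the actual endpoints and parameter names

result = client.predict(
    handle_file("controlnet_render.png"),   # structure image (my ControlNet output)
    handle_file("tortoise_and_dog.jpg"),    # style image (the reflective-tile portrait)
    "portrait of a tortoise and a dog on a reflective tile floor",  # prompt
    api_name="/generate",                   # placeholder endpoint name
)
print(result)
```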
Last night I took the Flux output and had it described using WillemVH/Image_To_Text_Description, which produced a very accurate description of the image.
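The captioning step doesn't have to go through a Space. As a local stand-in (using a BLIP image-to-text pipeline, not the model behind that Space), the same job looks like this:

```python
# Local stand-in for the describe-the-image step, using BLIP captioning via
# the transformers pipeline (not the model used by the Space mentioned above).
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-large")
caption = captioner("flux_output.png")[0]["generated_text"]
print(caption)
```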
That last step confirmed why I prefer working from sketches rather than language. One, I am a visual artist, so I have much better nuance in my drawings than in my words. Two, my mind's eye looks for the distorted. Three, it's MORE FUN.
I continue to experiment with taking a sketch to a full 3D model using an AI-only workflow. Yesterday I tried (and loved) VAST-AI/TripoSG.
The texture quality and the accuracy with which it guesses the unseen parts of a random object are a marvel. This is an insect-meets-crab character I first created in 2D, and the short video attached shows the results. Great work!
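This Space can be scripted the same way. Again, the endpoint and argument names below are guesses until you check view_api(); I also load the returned mesh with trimesh just to sanity-check it.

```python
# Hedged sketch of an image-to-3D call against the TripoSG Space; the
# api_name and return format are assumptions, so inspect view_api() first.
from gradio_client import Client, handle_file
import trimesh

client = Client("VAST-AI/TripoSG")
client.view_api()  # prints the real endpoints and parameters

result = client.predict(
    handle_file("insect_crab_character.png"),  # the 2D character render
    api_name="/generate",                      # placeholder endpoint name
)
print(result)  # assumed to be a path (or tuple of paths) to the generated .glb

glb_path = result if isinstance(result, str) else result[0]
scene = trimesh.load(glb_path)                 # quick local sanity check of the mesh
print(scene.bounds)
```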
When I start my daily drawings, the prompt is not my focus as much as moving quickly, waking up my brain, and connecting loose ideas. Here I started with some ovals and eventually was thinking of cells, tissue, and micro-level shapes. I add some branching and artifacts to create more possibilities when I render it in Stable Diffusion.
After the first few images I start using the previous render as the style image for the next render (a simple feedback loop, sketched below). Quick video sequence and some of my favorite selects follow.
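The chaining itself is just that feedback loop: each output becomes the style image of the next call. Assuming the same hedged Space call as in the earlier sketch (placeholder api_name, output assumed to be a file path), it looks something like this:

```python
# Feedback loop: each render becomes the style image for the next render.
# The Space call mirrors the earlier sketch; api_name is a placeholder and
# the return value is assumed to be a path to the new image.
from gradio_client import Client, handle_file

client = Client("multimodalart/flux-style-shaping")

structure = "morning_sketch.png"   # the day's ink drawing
style = "first_render.png"         # the first Stable Diffusion render
prompt = "cells, tissue, branching micro-level shapes"

outputs = []
for i in range(6):
    out = client.predict(
        handle_file(structure),
        handle_file(style),
        prompt,
        api_name="/generate",      # placeholder; check client.view_api()
    )
    outputs.append(out)
    style = out                    # previous render drives the next render's style
```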
The handheld point-and-shoot digital camera is close to my heart because my dad always had his in the front pocket of his shirt. Love the lines, the simplicity, and the compact greatness. Some fun mashups.
Yesterday we published the first large-scale evaluation of the new model, showing that it absolutely leaves the competition in the dust. We have now made the results and data available here! Please check it out and ❤️!
The CogVLM 2 Batch Processing App has been updated to support the RTX 5000 series; I compiled xFormers to make it work. It is a very powerful vision model for image captioning, and it also still runs on older GPUs (4000, 3000, and 2000 series). It supports 4-bit quantization as well, so it uses a minimal amount of VRAM.
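For reference, loading CogVLM 2 in 4-bit with transformers and bitsandbytes looks roughly like the sketch below. The build_conversation_input_ids helper lives in the model's remote code (trust_remote_code), so check the model card for the exact arguments; this is not the app itself, just the loading pattern.

```python
# Rough sketch of 4-bit CogVLM2 loading for captioning; the chat-template helper
# comes from the model's trust_remote_code implementation, so details may differ.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL = "THUDM/cogvlm2-llama3-chat-19B"
tokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # keeps VRAM use low
).eval()

image = Image.open("render.png").convert("RGB")
query = "Describe this image in detail."

# Custom helper defined in the model's remote code (per the model card, as I recall it).
conv = model.build_conversation_input_ids(
    tokenizer, query=query, images=[image], template_version="chat"
)
inputs = {
    "input_ids": conv["input_ids"].unsqueeze(0).to(model.device),
    "token_type_ids": conv["token_type_ids"].unsqueeze(0).to(model.device),
    "attention_mask": conv["attention_mask"].unsqueeze(0).to(model.device),
    "images": [[conv["images"][0].to(model.device).to(torch.bfloat16)]],
}

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=512)
    caption = tokenizer.decode(out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(caption)
```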