AI Texturing Properties
UModeler X provides a wide range of settings for AI Texturing, which are integrated with the Stable Diffusion Web UI through its API. The AI Texturing properties shown in UModeler X are primarily the most frequently used settings in the Stable Diffusion Web UI, so you can quickly change settings and see the results without going back to the Web UI. Settings adjusted in the Stable Diffusion Web UI are automatically reflected in UModeler X, even when the corresponding property is not exposed in UModeler X.

Properties

Projection Only
Generates a texture of the current state of the model without using AI Texturing. Useful for previewing the camera state or the current depth-map representation of the model with the various Generate buttons before applying AI Texturing.

Checkpoint
Selects the checkpoint to use for AI Texturing.

Address
The address used to access the Stable Diffusion Web UI. The default is http://127.0.0.1:7860/, and in most cases you will not need to change this value.

Prompt
A text field that specifies what you want to see in the generated texture.

Negative Prompt
A text field that specifies what you want to exclude from the generated texture. In general, it is recommended to fill in this field with a negative embedding rather than writing it yourself. For setup instructions, see the Negative Embedding guide (docid: k9368nnioc4uushlygqiv).

CFG Scale
The CFG Scale (Classifier-Free Guidance Scale) controls how closely the generated image follows the prompt. Higher values generate images that are more faithful to the prompt, but at the expense of image quality. Conversely, a lower CFG Scale produces an image that follows the prompt less closely, but with likely higher image quality. Recommended value: 7 to 9.

Sampler
Determines the probabilistic method used to generate the texture. Each sampler handles the random parts of the texture differently, making a small difference in the final result. Simply put, the sampler determines how randomness is handled during texture generation. Recommended samplers: Euler a, DPM++ SDE, and DPM2 Karras.

Sampling Step
Determines how much sampling is done during texture generation. More sampling steps can produce higher-quality textures, but increase generation time. Recommended value: 20 to 25.

Restore Faces
Used when faces or eyes are generated incorrectly. We recommend enabling this when facial expressions are misrepresented, especially in realistic art styles.

Image Max Size
Sets the size of the generated texture.

Batch Count
Sets the total number of images to be generated in a row in a single generation pass.

Use SceneMap (img2img) Layer as Inpaint Mask
Applies AI Texturing only to the areas painted with a brush on the paint layer. When painting, it is recommended to use the color you mainly want to represent, or a color similar to the surrounding colors.

Denoising Strength
Determines how much the painted area is allowed to change in content. Closer to 0, there is almost no change; closer to 1, the content changes significantly.
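The properties above correspond to fields of the Stable Diffusion Web UI's txt2img API, which UModeler X talks to at the Address shown earlier. Below is a minimal Python sketch of such a request, not UModeler X's actual code; it assumes the Web UI is running locally with its API enabled (the --api launch flag), and the checkpoint name is a hypothetical example.

```python
import base64

import requests

# Assumption: the Web UI is running with --api at the default address.
WEBUI_ADDRESS = "http://127.0.0.1:7860"

payload = {
    "prompt": "weathered stone wall, moss, photorealistic",  # Prompt
    "negative_prompt": "blurry, lowres, artifacts",          # Negative Prompt
    "cfg_scale": 8,             # CFG Scale: recommended 7 to 9
    "sampler_name": "Euler a",  # Sampler: Euler a / DPM++ SDE / DPM2 Karras
    "steps": 25,                # Sampling Step: recommended 20 to 25
    "restore_faces": False,     # Restore Faces
    "width": 512,               # Image Max Size
    "height": 512,
    "n_iter": 1,                # Batch Count
    "seed": -1,                 # -1 picks a random seed each time
    # The active checkpoint is switched through override_settings;
    # "v1-5-pruned-emaonly" is a hypothetical model name.
    "override_settings": {"sd_model_checkpoint": "v1-5-pruned-emaonly"},
}

response = requests.post(f"{WEBUI_ADDRESS}/sdapi/v1/txt2img", json=payload)
response.raise_for_status()

# The generated images come back base64-encoded.
for i, image_b64 in enumerate(response.json()["images"]):
    with open(f"texture_{i}.png", "wb") as f:
        f.write(base64.b64decode(image_b64))
```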
Use Depth ControlNet
Enables the depth model of ControlNet. Depth data represents points closer to the viewpoint in white and points farther away in black. UModeler X's AI Texturing converts the depth of the model into an image based on the current Scene view and applies it to the depth model. This setting can be used to generate more accurate textures that take the model's three-dimensional shape into account. A request-level sketch of this group of settings follows the Generate buttons below.

ControlNet Model
Selects the depth model to use. There are multiple versions of the same model, so choose the one you want.

Weight
A slider that controls the influence of the depth model. Values closer to 0 have less influence, while higher values have more.

Mode
Sets the emphasis between the prompt and ControlNet.
- Balanced: The prompt and ControlNet have equal importance.
- My prompt is more important: The prompt takes priority.
- ControlNet is more important: The depth model in ControlNet takes priority.

Near Distance Margin
Sets the margin for the near end of the depth range.

Far Distance Margin
Sets the margin for the far end of the depth range.

Extra ControlNet 1
Enabled when you want to use an additional ControlNet model alongside the default one. You can apply two or more ControlNet models in parallel or in chains to get an AI Texturing result that combines the strengths of each model.

Hires. Fix
Enables upscaling so that textures larger than the set size can be generated. Upscaling is faster than generating at the full resolution directly, but large textures can still take a long time to generate.

Upscaler
Selects the upscaling method used when upscaling the texture. Recommended: Latent, ScuNET GAN, R-ESRGAN 4x+, and R-ESRGAN 4x+ Anime6B.

Upscale By
Sets the scale factor by which the set texture size is multiplied.

Denoising Strength
Determines how much the texture generated during upscaling is allowed to change in content. Closer to 0, there is almost no change; closer to 1, the content changes significantly.

Hires Steps
Sets the number of sampling steps used for the upscaling pass. Higher values produce better quality but may increase generation time.

Seed
A unique number that plays an important role in texture generation. Using the same prompt, settings, and seed value will always generate the same image. The default value is -1, which generates a random image each time.

Shuffle
Sets the seed value to random. When clicked, the seed value is automatically set to -1.

Last Seed
Retrieves the seed value of the most recently generated texture (a sketch of reading this value through the API appears at the end of this page).

Generate Forever
When checked, textures are generated continuously until it is unchecked.

Generate
Starts texture generation.

Generate (Rect)
Starts texture generation for a rectangular area created by clicking and dragging in the Scene.

Custom Cameras
Generates textures based on pre-placed cameras. Select the desired camera by clicking the slot next to the property, and that camera will be used as the basis.
- R: Resets all set cameras.
- +: Adds a camera slot.
- -: Deletes a camera slot. The lowest camera slot is deleted first.

Generate (Custom Camera)
Generates textures in succession based on the number of cameras set up. For example, if you have two cameras set up, the texture will be generated twice in a row.

Generate (All in One)
Unlike the Generate (Custom Camera) button, this button generates one texture at a time, taking the orientation of all set cameras into account. If the cameras are positioned so that their image areas on the model overlap, the generated textures may also overlap, so be careful.
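At the API level, the Use Depth ControlNet and Hires. Fix groups extend the same txt2img request. The sketch below continues the payload dict from the earlier example; the ControlNet extension's args schema varies between versions, so treat the field names as an illustration, and the model and file names as hypothetical.

```python
import base64

# Depth map rendered from the model: white = near, black = far.
# "depth_from_scene.png" is a hypothetical stand-in for the image
# UModeler X produces from the current Scene view.
with open("depth_from_scene.png", "rb") as f:
    depth_b64 = base64.b64encode(f.read()).decode()

# 'payload' is the request dict from the previous sketch.
payload.update({
    "enable_hr": True,              # Hires. Fix
    "hr_upscaler": "R-ESRGAN 4x+",  # Upscaler
    "hr_scale": 2,                  # Upscale By
    "denoising_strength": 0.5,      # Denoising Strength
    "hr_second_pass_steps": 10,     # Hires Steps
})

# The ControlNet extension reads its settings from alwayson_scripts.
payload["alwayson_scripts"] = {
    "controlnet": {
        "args": [
            {
                "input_image": depth_b64,
                "model": "control_v11f1p_sd15_depth",  # ControlNet Model (name varies by install)
                "module": "none",  # the depth map is pre-rendered, so no preprocessor
                "weight": 1.0,     # Weight slider
                # Mode: 0 = Balanced, 1 = My prompt is more important,
                #       2 = ControlNet is more important
                "control_mode": 0,
            }
        ]
    }
}
```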
Result
An at-a-glance view of the generated textures and the various features associated with them.

Rescan
Brings up all of the textures generated for the currently active model.

Rescan All
Loads all textures created within the current project in bulk.

Apply Texture
Applies the selected texture to the model and displays it on the screen.

Restore Saved Camera Transform
Moves and rotates the Scene to the camera view that was active when the selected texture was generated.

Prompt History
Shows the history of prompts used. You can bring up the contents of the Prompt and Negative Prompt fields by clicking the Apply button next to the desired prompt history entry.
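The Last Seed feature has a direct counterpart in the Web UI API: the seed that was actually used is reported in the response's info field. A minimal sketch, reusing the response and payload objects from the first example:

```python
import json

# The Web UI returns generation metadata as a JSON string in "info".
info = json.loads(response.json()["info"])
last_seed = info["seed"]
print(f"Seed of the most recent texture: {last_seed}")

# Reusing the same seed with identical settings reproduces the texture,
# which is what makes the Seed / Last Seed workflow useful when you want
# to iterate on a result you like.
payload["seed"] = last_seed
```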