Skinning: Best Practices & Tools? (Newbie)

I'm something less than a newbie so I may not be asking this correctly... I am NOT a Java programmer.
This may be slightly off-topic for this forum, but I think this is the best group for this question.
I am looking for best practices and tools recommendations for skinning a Java app or game. I need to learn all I can about the steps, tools and potential pitfalls.
I've got great graphic designers who can make the GUI for me. Now I just need to know my steps for applying it and what I need to watch out for.
Thanks for any help: links, comments, titles, etc.

The Skin Look and Feel seems to be designed for desktop applications, and uses KDE and Gnome skins from X. This would be an awkward if not impossible way of skinning a game IMO. I'm interested in any other solutions out there.
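For a plain Swing UI, one alternative worth looking at is the Synth look and feel that ships with Java 5 and later: the graphic designers supply an XML descriptor plus images, and the application just loads it at startup. A minimal sketch, assuming a descriptor named skin.xml sits on the classpath next to the class (the file name and the class are only placeholders, not a real project):

    import javax.swing.JButton;
    import javax.swing.JFrame;
    import javax.swing.SwingUtilities;
    import javax.swing.UIManager;
    import javax.swing.plaf.synth.SynthLookAndFeel;

    public class SkinDemo {
        public static void main(String[] args) throws Exception {
            // Load the designer-supplied skin definition (XML plus images) from the classpath.
            SynthLookAndFeel synth = new SynthLookAndFeel();
            synth.load(SkinDemo.class.getResourceAsStream("skin.xml"), SkinDemo.class);
            UIManager.setLookAndFeel(synth);

            // Any Swing components created after this point pick up the skin.
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    JFrame frame = new JFrame("Skinned demo");
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.add(new JButton("Styled by skin.xml"));
                    frame.pack();
                    frame.setVisible(true);
                }
            });
        }
    }

For a game-style UI that bypasses standard widgets entirely, the same artwork can instead be loaded with ImageIO and drawn in a custom paintComponent, which avoids the look-and-feel machinery altogether.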

Similar Messages

  • Skinning Best Practice

    I'm using JDeveloper 11g 11.1.1.2.0
    I've read the skinning sections in both the Building RIA and the WebCenter documentation.
    But I'm wondering what the best practice is in a multi-application environment.
    To give a clear picture: I have, say, 5 applications, and all of them use the same skin. What I've done so far is build the skin and package it, with all its images, in a .jar file.
    I then add this JAR to EACH application's WEB-INF\lib directory.
    My question is: is there a better way, or what is the best way, to handle skins in a multi-app scenario? Is copy-pasting the .jar file into each project the right way, or can there be a centralized directory for the skin?

    Certainly the latter. We JAR our skin as per the JDev Web Guide (http://download.oracle.com/docs/cd/E14571_01/web.1111/b31973/af_skin.htm#CHDBEDHI), then place it in a central directory. The JAR is then attached to each application's ViewController project via Project Properties -> Libraries and Classpath.
    Presumably a Maven deployment (we don't use Maven so I can't comment specifically) is also possible, but that's beyond the scope of your question.
    CM.

  • Storage Server 2012 best practices? Newbie to larger storage systems.

    I have many years managing and planning smaller Windows server environments, however, my non-profit has recently purchased
    two StoreEasy 1630 servers and we would like to set them up using best practices for networking and Windows storage technologies. The main goal is to build an infrastructure so we can provide SMB/CIFS services across our campus network to our 500+ end user
    workstations, taking into account redundancy, backup and room for growth. The following describes our environment and vision. Any thoughts / guidance / white papers / directions would be appreciated.
    Networking
    The server closets all have Cisco 1000T switching equipment. What type of networking is desired/required? Do we
    need switch-based LACP, or will the Windows Server 2012 NIC-teaming options be sufficient across the four 1000T ports on the StoreEasy?
    NAS Enclosures
    There are 2 StoreEasy 1630 Windows Storage servers. One in Brooklyn and the other in Manhattan.
    Hard Disk Configuration
    Each of the StoreEasy servers has 14 3TB drives for a total RAW storage capacity of 42TB. By default the StoreEasy
    servers were configured with 2 RAID 6 arrays and 1 hot-standby disk in the first bay. One RAID 6 array is made up of disks 2-8 and presents two logical drives to the storage server: a 99.99GB OS partition and a 13872.32GB NTFS D: drive. The second
    RAID 6 array resides on disks 9-14 and is partitioned as one 11177.83GB NTFS drive.
    Storage Pooling
    In our deployment we would like to build in room for growth by implementing storage pooling that can be later
    increased in size when we add additional disk enclosures to the rack. Do we want to create VHDX files on top of the logical NTFS drives? When physical disk enclosures, with disks, are added to the rack and present a logical drive to the OS, would we just create
    additional VHDX files on the expansion enclosures and add them to the storage pool? If we do use VHDX virtual disks, what size virtual hard disks should we make? Is there a max capacity? 64TB? Please let us know what the best approach for storage pooling will
    be for our environment.
    Windows Sharing
    We were thinking that we would create a single share granting all users within the AD FullOrganization user group
    read/write permission. Then within this share we were thinking of using NTFS permissions to create subfolders with different permissions for each departmental group and subgroup. Is this the correct approach, or do you suggest a different one?
    DFS
    In order to provide high availability and redundancy we would like to use DFS replication on shared folders to
    mirror storage01, located in our Brooklyn server closet, and storage02, located in our Manhattan server closet. Presently there is a 10TB DFS replication limit in Windows 2012. Is this replication limit per share, or for the total of all files under DFS? We have been
    informed that HP will provide an upgrade to Storage Server 2012 R2 when it becomes available. In the meantime, how should we design our storage and replication strategy around the limits?
    Backup Strategy
    I read that Windows Server Backup can only back up disks up to 2TB in size. We were thinking that we would like
    our 2 current StoreEasy servers to back up to each other (to an unreplicated portion of the disk space) nightly until we can purchase a third system for backup. What is the best approach for backup? Should we use Windows Server Backup to capture the data
    volumes?
    Should we use a third party backup software?

    Hi,
    Sorry for the delay in reply.
    I'll try to answer each of your questions. For the first one, however, you may want to post to the Network forum for further information, or contact your device vendor (HP) to see if they have any recommendations.
    For Storage Pooling:
    From your description, you would like to create VHDX files on the RAID 6 disks to allow for growth. That is fine, and as you said, VHDX is limited to 64TB. See:
    Hyper-V Virtual Hard Disk Format Overview
    http://technet.microsoft.com/en-us/library/hh831446.aspx
    Another possible solution is Storage Spaces, a new feature in Windows Server 2012. See:
    Storage Spaces Overview
    http://technet.microsoft.com/en-us/library/hh831739.aspx
    It lets you add hard disks to a storage pool and create virtual disks from the pool. You can add disks to the pool later and create new virtual disks as needed.
    For Windows Sharing
    Generally you will end up with additional shared folders over time. Creating all shares under a single root folder sounds good, but in practice it may not be achievable, so it really depends on your actual environment.
    For DFS replication limitation
    I assume the 10TB limitation comes from this link:
    http://blogs.technet.com/b/csstwplatform/archive/2009/10/20/what-is-dfs-maximum-size-limit.aspx
    I contacted the DFSR team about the limitation. DFS-R can actually replicate more data than that; there is no exact limit. As you can see, that article was written in 2009.
    For Backup
    As you said, there is a backup limitation (2TB for a single backup). So if that cannot meet your requirements, you will need to find a third-party solution.
    Backup limitation
    http://technet.microsoft.com/en-us/library/cc772523.aspx
    If you have any feedback on our support, please send to [email protected]

  • E-business backup best practices

    What are the best practices, tools, or techniques for backing up E-Business Suite?
    For example, what we do now is copy the Oracle folder on the D: partition every two days,
    but it takes a long time, and I believe there is a better way.
    We are on Windows Server 2003 and E-Business Suite 11i.

    user11969921 wrote:
    What are the best practices, tools, or techniques for backing up E-Business Suite?
    For example, what we do now is copy the Oracle folder on the D: partition every two days,
    but it takes a long time, and I believe there is a better way.
    We are on Windows Server 2003 and E-Business Suite 11i.
    Please see previous threads for the same topic/discussion -- https://forums.oracle.com/search.jspa?view=content&resultTypes=&dateRange=all&q=backup+EBS&rankBy=relevance
    Please also see RMAN manuals (incremental backup, hot backup, ..etc) -- http://www.oracle.com/pls/db112/portal.portal_db?selected=4&frame=#backup_and_recovery
    Thanks,
    Hussein

  • SAP Best Practices - Error during the activation of Baseline Package

    Hello,
    I don't know if this is the right forum for this message, but I'm posting here as I didn't get the information from the related forum; apologies for that.
    While activating Baseline Package 603V8 on ECC 6.0 EHP3 (I'm importing all scenarios) from the solution builder (transaction /n/smb/bbi), the process stopped and showed me the following error:
    Start activation BC Set: /SMBA0/V_T001P_B0BN_J01
    Not activated - error
    End of activation BC Set: /SMBA0/V_T001P_B0BN_J01
    Can someone help me solve this problem?
    Thank you
    João Dimas - Portugal

    Hello again,
    I already solved the specific issue I reported in my previous message: I did a manual activation (transaction SCPR20) of BC Set /SMBA0/V_T001P_B0BN_J01, guided by the document "SAP Best Practices Tools". After this activation I pressed the Change button displayed in the Old Status column, changed the status to successful, and then chose Activate to continue with the installation. But once again another error stopped the installation, now with a different BC Set:
    /SMBA0/V_T001L_B175_J0A
    I tried to solve it with the same method, through manual activation, but this time it was not possible: when I do that in SCPR20, the activation logs show me four warnings (see the image "BC Sets: Activation logs"). I don't want to continue with the activation/installation of the full scope without solving this issue; or can I continue?
    Can you help me please?
    Thank you,
    João Dimas - Portugal

  • I found warnings after running the Best Practices Analyzer tool in Exchange 2010

    Hello,
    When I ran the Best Practices Analyzer tool I found some warnings:
    1- DNS 'Host' record appears to be missing
    2- Active Server Pages is not installed
    3- Application log size
    4- Self-signed certificate found: it is strongly recommended that you install an authority-signed or trusted certificate. The SSL certificate for 'https://exchange.mydomain.com/Autodiscover/Autodiscover.xml' is self-signed. It does not provide any of the security guarantees provided by authority-signed or trusted certificates. (I have an SSL certificate from GeoTrust.) All users can access mail from OWA and can connect to their mailbox using Outlook Anywhere, but with an SSL warning.
    5- Single global catalog in topology: there is only one global catalog server in the Directory Service Access (DSAccess) topology on server CADEXCHANGE. This configuration should be avoided for fault-tolerance reasons.
    I already checked the links below but I didn't understand them well:
    http://technet.microsoft.com/en-us/library/6ec1c7f7-f878-43ae-bc52-6fea410742ae.aspx
    http://technet.microsoft.com/en-us/library/4fa708a1-a118-4953-8956-3c50399499f8.aspx
    http://technet.microsoft.com/en-us/library/8867bba7-7f81-42f9-96b6-2feb7e0cea4e.aspx
    Please advise me on how to avoid these issues.
    thanks

    I have 2 servers, and both are global catalog servers.
    My question is why the warning says there is only one global catalog.
    Please explain this.
    When I test Autodiscover, the test is successful, but when I expand the menu
    I find some errors:
    Attempting to test potential Autodiscover URL https://Mydomain.com:443/Autodiscover/Autodiscover.xml
    Testing the SSL certificate to make sure it's valid.
    Validating the certificate name.
    Certificate name validation failed 

  • Best Practice - Copying styles & skin from OBIEE 11.1.1.5.x to 11.1.1.7.x

    Hi,
    If we copy the style (s_blafp) and skin (sk_blafp) files from OBIEE 11.1.1.5.x to 11.1.1.7.x, what issues will we face, and is it a best practice?
    Thanks in Advance!
    Satheesh

    http://hekatonkheires.blogspot.com/2013/08/custom-style-and-skin-in-obiee-11117.html

  • How to use best practices installation tool?

    Hello!
    Can anyone share some useful links/documents that explain how to use the Best Practices installation tool (/SMB/BBI)?
    Any responses will be awarded points.
    Regards,
    Samson

    Hi,
    Will you please share the same?
    Thanks in advance

  • Using Best Practices personalization tool

    hello,
    We wish to use the Best Practices personalization tool for customer-specific data for the Baseline Package.
    From the documentation I do not understand whether it is possible to use it after installing the Baseline, or whether it has to be done simultaneously (meaning the personalized data has to be ready in the files before starting the Baseline installation)?
    Thank you
    Michal

    Hi,
    Please Ref:
    http://help.sap.com/bp_bl603/BL_IN/html/index.htm
    Your personalized files need to be prepared before implementation, as you will be using them during the installation process.
    The XML file and the TXT files you create with the personalization tool are used to upload the scenario to the system; otherwise it will upload the defaults.
    Also refer to note 1226570 (here I am referring to the IN version); you can check the same for other country versions as well.
    Thanks & Regards,
    Balaji.S

  • Best Practice, naming conventions and Ownership of accounts NEWBIE

    Hi gurus, please be gentle with me. I'm a sales manager in the UK and have been asked to check for best practices on naming accounts, and on who should own accounts, in CRM 2011. For example, I have accounts with several sites and many contacts. How should I name
    these, and who should own them: the office manager or the sales account manager who handles sales directly?
    Please help, I'm getting stressed. I think these are very simple questions for such a bunch of super gurus...

    Hello iBrummie,
    Regarding the accounts and their sites, you can always use accounts and sub-accounts. To achieve this, create the main account in CRM, then create the sites using the account entity as well, and link them to the
    main account using the "Parent Account" lookup field.
    About the ownership, CRM's security model works essentially with:
    Business Units
    Teams
    Users
    Security Roles
    This depends entirely on the way your company works, but what I would do (assuming that account information is shareable in your company) is make the sales account managers the owners of the records and provide read/write access at the business-unit level to
    the office managers.
    Here is some more info on the matter: 
    Security concepts for Microsoft Dynamics CRM
    How role-based security can be used to control access to entities in Microsoft Dynamics CRM
    Please mark as answer if I managed to help you.
    Regards,
    Pedro

  • Standards or Best Practices on limits to Adobe LiveCycle forms as data input tools?

    We are trying to determine whether we should be using Adobe LiveCycle forms for our user input pages, or a web input form (JSF) that is part of the application server. The data captured would then be rendered in multiple output forms (printing, etc.). We've proven that it can be done, but our experience so far is that to achieve the complex business rules, validations and calculations, there is a lot of JavaScript code to write. Has there been any industry assessment or best practice on determining when it is best to use a web form vs. an Adobe LiveCycle form for capturing user input, especially when there are a lot of strict validations, business rules, backend data calls, etc.? Our input requirements also differ significantly from our output requirements, where we will definitely be using LiveCycle forms.
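    To make the comparison concrete, here is a rough sketch of the kind of rule we mean, written as a server-side JSF validator rather than form-level JavaScript (the validator name and the quantity range are placeholders, not our real rules):

        import javax.faces.application.FacesMessage;
        import javax.faces.component.UIComponent;
        import javax.faces.context.FacesContext;
        import javax.faces.validator.FacesValidator;
        import javax.faces.validator.Validator;
        import javax.faces.validator.ValidatorException;

        // Placeholder business rule: an order quantity must fall inside a configured band.
        @FacesValidator("orderQuantityValidator")
        public class OrderQuantityValidator implements Validator {

            private static final int MIN_QTY = 1;
            private static final int MAX_QTY = 500;

            @Override
            public void validate(FacesContext context, UIComponent component, Object value)
                    throws ValidatorException {
                if (value == null) {
                    return; // required-field checking is handled elsewhere
                }
                // Assumes the component's converter has already produced a Number.
                int quantity = ((Number) value).intValue();
                if (quantity < MIN_QTY || quantity > MAX_QTY) {
                    FacesMessage msg = new FacesMessage(FacesMessage.SEVERITY_ERROR,
                            "Quantity out of range",
                            "Quantity must be between " + MIN_QTY + " and " + MAX_QTY + ".");
                    throw new ValidatorException(msg);
                }
            }
        }

    In a LiveCycle form the equivalent rule ends up as JavaScript attached to the field, which is where most of our hand-written code is going.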

    It sounds like you need to use the Adobe schema in MII so that the output of your transaction matches it.
    Regards,
    Jamie

  • Tips and best practices for translating C into LabVIEW? SERIOUS newbie...

    I need to translate a C function into LabVIEW.  This will be my *first* LabVIEW project.  I've been reading some tutorials, and I'm still struggling to get my brain out of "C/C++ mode" and learn the LabVIEW paradigms.
    Structurally, the function that I need to translate gets called from a while-loop and performs a bunch of mathematical calculations. 
    The basic layout is something like this (this obviously isn't the actual code, it just illustrates the general flow control and techniques that it uses).
    #include <math.h>

    struct Params
    {
        // About 20 int and float parameters
        int   someParam;
        float someOtherParam;
        // ...
    };

    // External global metrics updated by the function
    static float g_metric3;

    int CalculateMetrics(struct Params *pParams, float input1, float input2 /* etc. */)
    {
        int errorCode = 0;
        float metric1;
        float metric2;
        float metric3 = 0.0f;

        // Do some math like:
        metric1 = input1 * (pParams->someParam - 5);
        metric2 = metric1 + (input2 / pParams->someOtherParam);
        // Tons more simple math
        // A couple of for-loops

        if (metric1 < metric2)
        {
            // manipulate metric1 somehow
        }
        else
        {
            // set some kind of error code (placeholder value)
            errorCode = 1;
        }

        if (!errorCode)
            metric3 = metric1 + powf(metric2, 3.0f);

        // More math...
        // etc...

        // update some external global metrics variables
        g_metric3 = metric3;
        return errorCode;
    }
    I'm still too green to understand whether or not a function like this can translate cleanly from C to LabVIEW, or whether the LabVIEW version will have significant structural differences. 
    Are there any general tips or "best practices" for this kind of task?
    Here are some more specific questions:
    Most of the LabVIEW examples that I've seen (at least at the beginner level) seem to rely heavily on using the front panel controls to provide inputs to functions. How do I build a VI where the input arguments (input1, input2, etc.) come in as numbers and aren't tied to dials or buttons on the front panel?
    The structure of the C function seems to rely heavily on the use of stack variables like metric1 and metric2 in order to perform calculations.  It seems like creating temporary "stack" variables in LabVIEW is possible, but frowned upon.  Is it possible to keep this general structure in the LabVIEW VI without making the code a mess?
    Thanks guys!

    There's already a couple of good answers, but to add to #1:
    You're clearly looking for a typical C function. Any VI that doesn't require opening the front panel (user interaction) can be such a function.
    If the front panel is never opened, the controls are merely used to send data to the VI, much like (identical to) the declaration of a C function. The indicators can/will be the return values.
    Choosing which controls and indicators are used to send data in and out of a VI is almost too easy: click the icon of the front panel (top right), show the connector, and click which control/indicator goes where. Done. That's your function's declaration.
    Basically one function is one VI, although you might want to split it even further; don't create 3k x 3k pixel diagrams.
    Depending on the amount of calculation done in your if/else branches, they might be subVIs of their own.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. Anyway, what I often struggle with is the Logical Levels (in the Content tab) where the level of each dimension is set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates, i.e. getting them to work/appear with different dimensions. (Using the menu "More" - "Get Levels" does not always give the best solution... far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI Server.
    For instance: I have about 10-12 different dimensions. Should all of them always be connected to each fact table, either at the Detail or the Total level? I can see the use of the logical levels when using aggregate fact tables (on quarter, month, etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but I haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level.
    It is not necessary to connect all of the dimensions; it depends on the report you are creating. But as a best practice, you should maintain them all at the Detail level when you specify join conditions in the physical layer.
    For example, for the sales table, if you want to report at the ProductDimension.ProductName level then you should use the Detail level; otherwise use the Total level (at the Product, Employee level).
    Get Levels. (Available only for fact tables) Changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the Administration Tool will not include the aggregation content of this dimension.
    Source: Admin Guide (Get Levels definition)
    thanks,
    Saichand.v

  • Best practice to move things between various environments in SharePoint 2013

    Hi All SharePoint Gurus!! - I was using the SP Deployment Wizard (spdeploymentwizard.codeplex.com) to move sites/lists/libraries/items etc. in SP 2010. We just upgraded to SP 2013. I have a few lists and libraries that I need to push
    into the Staging 2013 and Production 2013 environments from the Development 2013 environment. The SP Deployment Wizard is throwing an error right at startup. I checked that SP 2013 provides granular backups, but they are restricted to the list/library level. Could anybody
    let me know if the SP Deployment Wizard works for 2013? I love that tool. Also, what's the best practice to move things between various environments?
    Regards,
    Khushi

    Hi Khushi,
    I want to let you know that we built a SharePoint migration tool, MetaVis Migrator, that can copy and migrate to and from on-premise or hosted SharePoint sites. The tool can copy entire
    sites with sub-site hierarchies, content types, fields, lists, list views, documents, items with attachments, look-and-feel elements, permissions, groups and other objects - all together or at any level of granularity (for
    example, just lists, just list views, or selected items). The tool preserves created/modified properties, all metadata and versions. It looks like Windows Explorer with copy/paste and drag-and-drop functions, so it is easy to learn. It does not require any
    server-side installations, so you can do everything using your computer or any other server. The tool can copy complete sites or just individual lists or even selected items. The tool also supports incremental or delta copies based on previous migrations.
    The tool also includes a Pre-Migration Analysis that helps to identify customizations.
    A free trial is available: http://www.metavistech.com. Feel free to contact us.
    Good luck with your migration project,
    Mark

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, unsecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in, ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as data in a data warehouse generated by an ETL process, might be exempt; but the process that creates that data is not exempt - that process, and ultimately the data, must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business user, IT and even legal. The deployment documents always include recovery steps so that if something goes wrong or the deployment can't proceed, there is a documented procedure for restoring the system to a valid working state.
    The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party is responsible for assisting in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree that for a simple scenario of 5 new tables and a small amount of data it may seem like overkill.
    But despite what you say, it simply cannot be that easy, for one simple reason: adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention what changes are being made to actually USE what you are adding.
