Developer access to the "apps" database user (best practice)?

Hello all,
I'm trying to understand a developer's need to have access to the "apps" database user. The argument can be made that the need is there for all development efforts, but is that really the case? Should all updates/changes by the "apps" user be executed by the apps DBA even in the development environment? I'm trying to get a better understanding of how other organizations are set up. Our shop currently allows developers free rein with the apps user, but there are known security risks in doing so, with "apps" being such a powerful user of the database environment.
Thanks in advance; any recommendations (best practices) will be greatly appreciated.

Hi,
We usually give developers access to the APPS schema on the development instances. Access to this schema (and other schemas) is revoked on UAT/PROD (and other instances where the data is not sanitized). When giving such access we are not as worried about the data as we are about the objects, but even this is not a big issue, as we can recover those objects from other instances. Some organizations do not provide their developers with access to the APPS schema on the development instances, and all the tasks are done by the Apps DBA team. Another approach would be creating a read-only APPS schema (search the forum for details) to allow developers to view the objects/data without bugging the DBAs with such routine tasks.
Thanks,
Hussein
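
The read-only APPS schema mentioned above can be sketched roughly as follows. This is an illustrative sketch only: the user name, password, and the single object shown are placeholders, and a real script would generate the grant/synonym pairs for every APPS object (for example by looping over DBA_OBJECTS).

```sql
-- Hypothetical read-only account; names are illustrative
CREATE USER apps_ro IDENTIFIED BY "change_me";
GRANT CREATE SESSION, CREATE SYNONYM TO apps_ro;

-- Repeat for each APPS object developers need to see
GRANT SELECT ON apps.fnd_user TO apps_ro;
CREATE SYNONYM apps_ro.fnd_user FOR apps.fnd_user;
```

Developers then connect as APPS_RO and query the familiar object names, but cannot modify data or code.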

Similar Messages

  • Creating web page now, want mobile app later.  Best practices?

    I've been writing actionscript on and off for a long time now, but have always used the Flash IDE.  I'm hoping to build my next project using Flex Builder because I'm more of a code jockey than a designer.  I've got Flex Builder 3.
    I'm involved with a modest mobile game project.  We're hoping to build a simple game and it seemed logical to do a quick-and-dirty version of the game in actionscript so the project members and a select audience might be able to evaluate the game dynamics in a browser before we spend a gazillion dollars building the game for iPhone and Android platforms.
    It is my sincerest hope that I might develop my code in Actionscript and MXML and use this code to build mobile apps for iPhone, iPad, and Android without having to rewrite everything in Objective C or Java or whatever other language might be in play at the time we get it finished.
    Q1: Is this possible?
    Q2: Can anyone sketch out for me an overview of the process whereby one exports a Flex project to a mobile app platform?
    Q3: Can any seasoned developers tell me the big "gotchas" to watch out for?  For instance, I'd hate to incorporate a component that would not export to one of the mobile platforms.
    Any help would be greatly appreciated.

    Thanks for your helpful post.  That's encouraging to know that I might be able to get the Flex 4.5 SDK without upgrading my IDE.  I'm a big fan of Eclipse and am considering trying to just use Eclipse with a Flex plugin. I certainly hope I might be able to download the latest Eclipse and install a 4.5 SDK plugin.  Might as well shoot straight for the latest SDK, right?  It would be great to avoid the $300 investment.
    From your post, it sounds as though the Flex 4.5 sdk will be required for one to access the mobile phone features (gps, camera, accelerometer, etc.) which makes sense.  The ability to go actionscript->mobile was only recently announced.  Personally I think this capability is brilliant on Adobe's part.
    I'm still wondering about the "best practices" aspect. Obviously the idea of a mouseover does not apply in the context of a touchscreen.  As I recall, the events exposed by Cocoa Touch (or other touch screen APIs/libraries) don't have the same Mouse/Pointer events as the AS3 that I know.  The point of this post (which is working well so far - thanks for the input) is to try and make sure that I avoid building an app which uses features unavailable in a mobile context.  I'm starting to wonder if the advice to skip the web page stage is being given because the mobile and web page paradigms are so different.
    I still assert that the web page stage would make it much easier to bring in testers - meaning non-technical people who wouldn't know the first thing about installing an app on their phone. I'm talking about an audience of dozens or possibly hundreds, and I just can't provide them all guidance about getting the app installed.

  • 1 RH Project, 2 Users - Best Practices

    Hi,
    I've been the sole writer at our company for 6+ years. I finally have someone to help me; however, this has introduced a new challenge. We will *both* be working on the same RH project, and we will be using VSS for source control. I'm not sure how we should be 'sharing' this single RH project.
    Does anyone have any best practices for when working in this type of situation?
    I have questions such as:
    When the other person is creating Index keywords, what if I have files checked out - how will this affect the adding of keywords?
    When the other person creates new snippets or new user-defined variables, should he immediately check them in and let me know so that I can do a get latest and have the new snippets/variables in my project?
    How do we manage both of us working on the same project, and having the need to check in/check out files, create new topics, etc. - what should our 'workflow' be?
    Thanks in advance for ANY assistance/tips that any of you can provide!

    I like Author Care's golden rule: keep things simple and robust. This topic touches on the three basic ways of sharing help authoring tasks. In order of complexity:
    1. Serial authoring. If you don't need to have both authors in the project at the same time, you can simply take turns working on the project. Just pass the files back and forth as needed. This is the simplest and most robust approach.
    2. Merging projects. If you need concurrent authoring, then, yes, this is a simpler and more robust approach than source control. However, this only works if you can partition your material and your work assignments into two or more clearly-delineated parts. Merging projects can be a great solution, but it doesn't fit all cases.
    3. Source control. If multiple authors need concurrent access to the same material, then source control is the simplest answer.
    Here are some tips and observations, based on my experience with RoboSource Control, in no particular order:
    1. Source control works best on small-to-medium-sized projects. Large ones can be unstable.
    2. Set it up to restrict file checkouts to one author only. Allowing two authors to work on a single topic simultaneously is bad.
    3. If possible, try to work in different areas of the project that don't touch. Remember that a single change in one topic can ripple out to many related topics. (For example, if you change the filename of a topic, every link to that topic must be changed.) If someone else is working in one of those related topics, you will not be able to complete your initial change.
    4. Make backup copies of your projects regularly, even though they are in source control.
    5. Create an administrator account to be used just for that purpose. Don't use that account for regular authoring. Don't give everyone administrative privileges.
    6. Designate one person as the administrator. Have at least one backup administrator. These will be the people who set up user accounts, override checkouts ("I need that file, and Joe is on vacation!"), resurrect old files, sort out source control conflicts, etc.
    7. Check in files as soon as you're done with them. Don't leave them checked out longer than necessary.
    8. If you have large projects, your virus scan utility can really degrade performance during certain operations, such as the initial "get" of project files. If this is the case, you might be able to configure your antivirus program to be friendlier to these source control activities.
    9. The help authors should stay in close communication. Let each other know what you're doing, especially if you are doing something radical like moving folders around. Be ready to check something back in if someone else needs it.
    10. Give a lot of thought to how your project is structured. Consider file structure, naming conventions, etc.
    11. Some actions are more source control intensive than others. (Moving, deleting or renaming folders are biggies.) Your project is vulnerable while these changes are in progress. If something goes wrong before the process is complete, you can end up with a mess on your hands. For example, let's say there's a network glitch while you're moving a folder, interrupting your connection with source control. You can end up with RH thinking that the folder is in one place, while source control thinks it's in another. The result is broken links and missing files. Time for the administrator to step in and sort things out. This is almost never a problem for small projects. It becomes a real issue for large projects.
    12. If you're getting close to a deadline, DO NOT pick that time to reorganize and rename files and folders.
    13. Follow the correct procedure for adding a project to source control. Doing it wrong will really mess you up. Adding a project to RoboSource Control is easy. I can't speak for other source control solutions.
    14. You might find it necessary to rebuild your cpd file more often than with non-source-controlled projects.
    15. Have I mentioned lately that you should back up your source files?
    HTH,
    G

  • Database Administration - Best Practices

    Hello Gurus,
    I would like to know various best practices for managing and administering Oracle databases. To give you all an example of what I am thinking about - for example, if you join a new company and would like to see if all the databases conform to some kind of standard/best practices, what would you look for - for instance - are the control files multiplexed, is there more than one member for each redo log group, is the temp tablespace using TEMPFILE or otherwise... something of that nature.
    Do you guys have something in place which you use on a regular basis? If yes, I would like to get your thoughts and insights on this.
    Appreciate your time and help with this.
    Thanks
    SS

    I have a template that I use to gather preliminary information so that I can at least get a glimmer of what is going on. I have posted the text below... it looks better as a spreadsheet.
    System Name
    System Description
    Contacts (Name / Phone / Pager):
        System Administrator
        Security Administrator
        Backup Administrator
    Below this line, filled out for each server in the system:
        Server Name
        Description (Application, Database, Infrastructure, ...)
        ORACLE version/patch level, CSI
        Logins (with Next Pwd Exp):
            Server Login
            Application Schema Owner
            SYS
            SYSTEM
        Locations:
            ORACLE_HOME
            ORACLE_BASE
            Oracle User Home
            Oracle SQL scripts
            Oracle RMAN/backup scripts
            Oracle BIN scripts
            Oracle backup logs
            Oracle audit logs
            Oracle backup storage
            Control File 1
            Control File 2
            Control File 3
            Archive Log Destination 1
            Archive Log Destination 2
            Datafiles Base Directory
        Backups (Type / Day / Time / Est. Time to Comp. / Approx. Size):
            archive log
            full backup
            incremental backup
    As for "Best" practices, well I think that you know the basics from your posting but a lot of it will also depend on the individual system and how it is integrated overall.
    Some thoughts I have for best practices:
    Backups ---
    1) Nightly if possible
    2) Tapes stored off site
    3) Archives backed up throughout the day
    4) To disk, then to tape; leave the backup on disk until the next backup
    Datafiles ---
    1) Depending on hardware used.
    a) separate datafiles from indexes
    b) separate high I/O datafiles/indexes on dedicated disks/LUNs/trays
    2) file names representative of usage (similar to its tablespace name)
    3) Keep them a reasonable size, < 2 GB (again, system architecture dependent)
    Security ---
    At least meet DOD - DISA standards where/when possible
    http://iase.disa.mil/stigs/stig/database-stig-v7r2.pdf
    Hope that gives you a start
    Regards
    tim

  • Metadata Loads (.app) - What is best practice?

    Dear All,
    Our metadata scan and load duration is approximately 20 mins (full load using the replace option). The business HFM admin has suggested the option of partial dimension loads in an effort to speed up the loading process.
    The HFM system admins prefer metadata loads with the replace option, as there seems to be less associated risk.
    With partial loads there appears to be a risk to cross-dimension integrity checking: changes are merged, potentially duplicating members when they are moved in the hierarchy.
    Are there any other risk with partial loads?
    Which approach is considered best practice?

    When we add new entities to our structure and load them with the merge option, they always appear at the bottom of the structure. But when we use the replace option they appear in the order that we want. For us, and for user-friendliness, we always use the replace option. And for us the metadata load usually takes at least 35 minutes. Last time - 1:15...

  • Database creation best practices.

    Hi,
    We are planning to set up a new database, Oracle 10g, on Sun and AIX. It is a data warehouse environment.
    Can anyone please share documents on best practices to be followed during database creation/setup? I googled and got some documents but was not satisfied with them, so I thought of posting this query.
    Regards,
    Yoganath.

    YOGANATH wrote:
    Anand,
    Thanks for your quick response. I went through the link, but it seems to be a brief one. I need a crisp summary document for my presentation, which covers:
    1. Initial parameter settings for a data warehouse to start with, like block_size, db_file_multiblock_read_count, parallel servers, etc.
    2. Memory parameters, SGA, PGA (say, for a server with 10 GB RAM).
    3. How to split tablespaces, e.g. large vs. small.
    If someone has a crisp outline document covering the above points, I would be grateful.
    Regards,
    Yoganath

    You could fire up dbca, select the 'data warehouse' template, walk through the steps, and at the end do not select 'create a database' but simply select 'create scripts', then take a look at the results, especially the initialization file. Since you chose a template instead of 'custom database' you won't get a CREATE DATABASE script, but you should still get some stuff genned that will answer a lot of the questions you pose.
    You could even go so far as to let dbca create the database. Nothing commits you to actually using that DB. Just examine it to see what you got, then delete it.
    Edited by: EdStevens on Feb 10, 2009 10:41 AM

  • ASM and Database Instances (best practices)

    Hello
    Platform AIX 5 TL8,
    Oracle 10.2.0.4
    Context RAC ASM (2 nodes)
    We have 25 Oracle databases running under the same ASM instance on our RAC. I think this is too much, and that splitting them, at least by creating a new ASM instance in another RAC environment, would be better.
    Any comments or advice?
    Best Regards
    Den

    user12067184 wrote:
    Hello
    Platform AIX 5 TL8,
    Oracle 10.2.0.4
    Context RAC ASM (2 nodes)
    We have 25 Oracle databases running under the same ASM instance on our RAC. I think this is too much, and that splitting them, at least by creating a new ASM instance in another RAC environment, would be better.
    Hi Den,
    It is not advisable to have 25 databases in a single RAC. Instead of separate databases, you could also think of creating different schemas in the same database.
    For ASM best practice please follow :
    ASM Technical Best Practices [ID 265633.1]
    Regards
    Rajesh

  • [iPhone SDK] Database/SQLite Best Practices

    Taking a look at the SQLiteBooks example, the idea of "hydrating" objects on-the-fly doesn't seem like a best practice to me. If you have a large number of books, for example, and they all need to be hydrated at the same time, I would want to lump the hydrating statements into one large query instead of each Book doing a separate query for its own data.
    I'm imagining these two scenarios:
    1. A large query that loops through the rows of its result set and creates Book objects from them, adding them to an array.
    2. A query to get the primary keys, creating Book objects with only that data. When the book hydrates, it queries the rest of the data specifically for itself. (SQLiteBooks example)
    I can see how the hydrating of method 2 would make it easier to manage the memory used by each Book object, but I'm really concerned about how long it would take hundreds of Books to hydrate themselves.

    If you know you're going to need many of the objects hydrated at the same time you may want to create some mechanism that allows you to do a single query and hydrate everything at once. In the case of SQLiteBooks, the hydrate method is only called when the user clicks through to the details view for a particular item, so hydrating objects en masse is not necessary.
    I'd be very careful to avoid premature optimization here. We're working with very limited resources on these devices, and it is definitely important to optimize your code, but if you're dealing with a large number of items you could easily shoot yourself in the foot by hydrating a bunch of items at once and using up all your memory. If you dehydrate your objects when you receive a memory warning (as I believe the SQLiteBooks example does) you could end up thrashing - reading all your data from the DB, hydrating your objects, receiving a memory warning, dehydrating, and repeating. As the previous reply states, indexed lookups in SQLite are extremely fast. Even on the iPhone hardware you can probably do hundreds of lookups per second (I haven't run a test, but that'd be my guesstimate).
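
    The difference between the two scenarios can be sketched in SQL (the table and column names are assumed from the SQLiteBooks sample, so treat them as illustrative):

    ```sql
    -- Scenario 2: per-object hydration, one round trip per Book (N+1 queries)
    SELECT title, copyright FROM book WHERE pk = ?;

    -- Scenario 1: batch hydration, one query whose result set is looped over
    -- to build all Book objects at once
    SELECT pk, title, copyright FROM book ORDER BY pk;
    ```

    As the reply notes, the batch form trades memory for round trips, so it only pays off when you genuinely need all the objects hydrated at once.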

  • Table Owners and Users Best Practice for Data Marts

    2 Questions:
    (1) We are developing multiple data marts that share the same instance. We want to deny access to users while tables are being updated. We have one generic user (BI_USER) with read access through one of the popular BI tools. For the current (first) data mart we denied access by revoking the privilege from BI_USER; however, going forward, other data marts' tables will be updated on different schedules, and we do not want to deny access to all the data marts at once. What is the best approach?
    (2) What is the best methodology for ownership of tables that are shared across data marts? Can we create one generic ETL_USER to update tables with different owners?
    Thanx,
    Jim Masterson

    If you have to go with generic logins, I would at least have separate generic logins for each data mart.
    Ideally, data loads should be transactional (or nearly transactional), so you don't have to revoke access ever. One of the easier tricks to accomplish this is to load data into a shadow table and then rename the existing table and the shadow table. If you can move the data from the shadow table to the real table in a single transaction, though, that's even better from an availability standpoint.
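
    The shadow-table swap described above might look like this (the table names are illustrative; note that views and synonyms referencing the renamed table would need to be recompiled or re-pointed):

    ```sql
    -- Build an empty shadow copy of the live table
    CREATE TABLE sales_fact_shadow AS
      SELECT * FROM sales_fact WHERE 1 = 0;

    -- ... run the ETL load into sales_fact_shadow ...

    -- Near-instant swap: readers keep hitting the old data until the renames finish
    RENAME sales_fact TO sales_fact_old;
    RENAME sales_fact_shadow TO sales_fact;
    ```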
    If you do have to revoke table access, you would generally want to revoke SELECT access to the particular object from a role while the object is being modified. If this role is then assigned to all the Oracle user accounts, everyone will be prevented from viewing the table. Of course, in this scenario, you would have to teach your users that "table not found" means that the table is being refreshed, which is why the zero downtime approach makes sense.
    You can have generic users that have UPDATE access on a large variety of tables. I would suggest, though, that you have individual user logins to the database and use roles to grant whatever ad-hoc privileges users need. I would then create one account per data mart, with perhaps one additional account for the truly generic tables, that own each data mart's objects. Those users would then grant different roles different database privileges, and you would then grant those different roles to different users. That way, Sue in accounting can have SELECT access to portions of one data mart and UPDATE access to another data mart without granting her every privilege under the sun. My hunch is that most users should not be logging in to, let alone modifying, all the data marts, so their privileges should reflect that.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Setting up Users - Best Practices

    Hi there. We are a large organization with a website consisting of over 40,000 pages. We have multiple users who need access to various parts of the website. We have discovered different methods of setting them up in Contribute but wonder if one method is better than the others.
    1. Create a separate new connection for each portion of the site. This, unfortunately, makes it impossible to share CSS layout files, images, and library items at the root of the website.
    2. Create a single connection to the server and create multiple roles - most likely named by the user's name - that limit directory access. Worried this might create an unwieldy list of users. This opens up shared assets, though, which will help with maintaining site standards.
    We also tried creating a single connection to the server and creating new roles - named according to which directory was being given access to - but quickly discovered that a user can only be assigned to one role.
    Has anyone else run across this?
    Thanks.

    Well, I definitely think that if you're going to maintain any level of sanity in your web file organization, you'll have to set up roles according to logical groupings and then assign folks to those roles. I would not mess with creating a variety of connections because that could easily turn into an authentication nightmare. Stick to one encrypted connection.
    The nature of your business organization should dictate how those role groups are created, for example a role for HR, Finance or Sales with a corresponding folder on the site. But it might be possible to do the opposite and create roles based on how your website is organized. For example, you might have a "Sales" section on your web site that folks from different departments need to have edit access to, and another section on your site that deals with "Administration" that may necessitate adding folks from, again, different departments within the organization. This is a bit opposite of the normal routine of setting up role groups based on the groups that already exist in the organization (HR, Finance, Sales, etc.).
    For us, it was fairly easy to create the groups because we're a school system and the site is organized by school sites.
    It would be great if folks could be assigned to multiple roles, but it could get messy with the "cascading" of permissions (i.e. permissions in this role but NOT in that role, however the second role is a sub-group of role one... see what I mean?).
    In the end, you may just need to graph out the type of organization you need in order to meet the needs of the site and/or organization. Then replicate that in how you set up roles in CPS.
    Hope this helps!

  • Windows Azure SQL Databases Statistics Best practices

    On SQL Azure, is it good practice to have statistics auto-update disabled, or otherwise?
    Please don't compare Azure to the on-premises SQL engine; those who have worked on Azure know what I mean. It is a pain to maintain indexes, especially if they have BLOB-type columns; no online index rebuilds are allowed in Azure. I was targeting statistics because I see the data being frequently updated, so maybe I can have the developers update the stats as soon as they do a major update/insert, or I can have a job do it on a weekly basis if I turn off the auto stats update.
    I execute a stats FULLSCAN update every week, but I think it is overwritten by the auto update stats. So, back to my question: does anyone have experience with turning off stats on Azure (any benefits)?

    You can't disable auto stats in WASD.  They're on by default and have to stay that way.
    Rebuilding indexes is possible, but you have to be careful how you approach it.  See my blog post for rebuilding indexes:
    http://sqltuna.blogspot.co.uk/2013/10/index-fragmentation-in-wasd.html
    As a rule I wouldn't have LOB columns as part of an index key - is that what's causing you issues?
    Statistics work the same as on-premises, in that they are triggered when a certain threshold of changes is reached (or some other triggers).  That's not a bad thing though, as it means they're up to date.  Is there any reason you think this is
    causing you issues?
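
    For reference, the weekly FULLSCAN refresh the original poster mentions would typically be expressed like this (the table name is illustrative):

    ```sql
    -- Rebuild statistics for one table with a full scan of its rows
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

    -- Or refresh out-of-date statistics across the whole database
    EXEC sp_updatestats;
    ```

    Since auto-update cannot be disabled here, such a job supplements the automatic updates rather than replacing them.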

  • Lync backend databases and Lync local databases placement best practices

    We are deploying Lync 2013 for 30,000 Lync users across 2 pools in 2 datacenters. Dedicated SQL server for Lync to host all databases for FE, PC and monitoring roles.
    Can anyone provide guidance around disk drives for SQL databases and local databases?
    Lync backend databases
    Planning for Lync Server 2013 requires critical thinking about the performance impact that the system will have on your current infrastructure. A point of potential contention in larger enterprise deployments is the performance of SQL Server and the placement of database and log files, to ensure that the performance of Lync Server 2013 is optimal, manageable, and does not adversely affect other database operations.
    http://blogs.technet.com/b/nexthop/archive/2012/11/20/using-the-databasepathmap-parameter-to-deploy-lync-server-2013-databases.aspx
    Is it recommended to place all Lync DBs on one drive and logs on another, or to separate them onto multiple drives using DatabasePathMap?
    Lync 2013 local databases 
    In the Capacity Planning guidance, Microsoft describes that the disk requirement for a Lync 2013 Front End server, given our usage model, is eight disks configured as two drives.
    One drive will use two disks in RAID 1 to store the Windows operating system and the Lync 2013 and SQL Express binaries
    One drive will use six disks in RAID1+0 to store databases and transaction logs from the two SQL Express instances (RTCLOCAL and LYSS) 
    Is this enough or should we separate the local DBs and logs onto separate drives as well? Also how big do the local DBs get?

    During the planning and deployment of Microsoft SQL Server 2012 or Microsoft SQL Server 2008 R2 SP1 for your Lync Server 2013 Front End pool, an important consideration is the placement of data and log files onto physical hard disks for performance.
    The recommended disk configuration is to implement a 1+0 RAID set using 6 spindles. Placing all database and log files that are used by the Front End pool and associated server roles and services (that is, Archiving and Monitoring Server, Lync Server Response Group service, Lync Server Call Park service) onto the RAID drive set using the Lync Server Deployment Wizard will result in a configuration that has been tested for good performance. The database files and what they are responsible for are detailed in the following table.
    http://technet.microsoft.com/en-us/library/gg398479.aspx
    Microsoft's technet recommendation contradicts the blog recommendation for Lync 2010.

  • Question about database structure - best practice

    I want to create, display, and maintain a data table of loan rates. These rates will be for two loan categories - Conforming and Jumbo. They will be for two loan terms - 15 year and 30 year. Within each term, there will be a display of points (0, 1, 3) - rate - APR.
    For example -
    CONFORMING
    30 year
    POINTS   RATE    APR
    0        6.375   6.6
    1        6.125   6.24
    3        6.0     6.12
    My first question is - would it be better to set up the database with 5 fields (category, term, points, rate, apr), or 13 fields (category, 30_zeropointRate, 30_onepointRate, 30_threepointRate, 30_zeropointAPR, 30_onepointAPR, 30_threepointAPR, 15_zeropointRate, 15_onepointRate, 15_threepointRate, 15_zeropointAPR, 15_onepointAPR, 15_threepointAPR)?
    The latter option would mean that my table would only contain two records - one for each of the two categories. It seems simpler to manage in that regard.
    Any thoughts, suggestions, recommendations?
    Murray --- ICQ 71997575
    Adobe Community Expert
    (If you *MUST* email me, don't LAUGH when you do so!)
    ==================
    http://www.projectseven.com/go
    - DW FAQs, Tutorials & Resources
    http://www.dwfaq.com - DW FAQs,
    Tutorials & Resources
    ==================
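
    A minimal sketch of the five-field (normalized) option from the question, with illustrative table and column names and the sample rows from the post:

    ```sql
    CREATE TABLE loan_rates (
        category VARCHAR(10)  NOT NULL,  -- 'CONFORMING' or 'JUMBO'
        term     INT          NOT NULL,  -- 15 or 30 (years)
        points   INT          NOT NULL,  -- 0, 1, or 3
        rate     DECIMAL(5,3) NOT NULL,
        apr      DECIMAL(5,3) NOT NULL,
        PRIMARY KEY (category, term, points)
    );

    INSERT INTO loan_rates VALUES ('CONFORMING', 30, 0, 6.375, 6.60);
    INSERT INTO loan_rates VALUES ('CONFORMING', 30, 1, 6.125, 6.24);
    INSERT INTO loan_rates VALUES ('CONFORMING', 30, 3, 6.000, 6.12);
    ```

    Adding a new term or points tier is then a new row rather than a schema change, which is the usual argument for the normalized form.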

    Thanks, Pat. I'm pretty sure that this is a dead-end expansion. The site itself will surely expand, but I think this particular need will be informational only....
    Murray --- ICQ 71997575
    Adobe Community Expert
    (If you *MUST* email me, don't LAUGH when you do so!)
    ==================
    http://www.projectseven.com/go
    - DW FAQs, Tutorials & Resources
    http://www.dwfaq.com - DW FAQs,
    Tutorials & Resources
    ==================
    "Pat Shaw" <[email protected]> wrote in message
    news:[email protected]...
    > But if the site ever wants to expand on it's
    functionality etc. it can be
    > very difficult to get round a de-normalised database.
    You can find that
    > you have tied yourself in knots and the only solution is
    to go back and
    > redesign the database which often includes major
    redesigning of the
    > fron-end too.
    >
    > If you are confident that this will not be the case then
    go with your
    > initial thoughts but don't be too lenient just in case.
    Leave yorself a
    > little scope. I always aim for 3rd normal form as this
    guarantees a robust
    > database design without being OTT.
    >
    > Pat.
    >
    >
    > "Joris van Lier" <[email protected]> wrote in
    message
    > news:[email protected]...
    >>
    >>
    >> "Murray *ACE*"
    <[email protected]> wrote in message
    >> news:[email protected]...
    >>> I want to create, display, and maintain a data
    table of loan rates.
    >>> These rates will be for two loan categories -
    Conforming and Jumbo.
    >>> They will be for two loan terms - 15year and
    30year. Within each term,
    >>> there will be a display of -
    >>>
    >>> points (0, 1, 3) - rate - APR
    >>>
    >>> For example -
    >>>
    >>> CONFORMING
    >>> 30 year
    >>> POINTS RATE APR
    >>> ----------- --------- ------
    >>> 0 6.375 6.6
    >>> 1 6.125 6.24
    >>> 3 6.0 6.12
    >>>
    >>> My first question is -
    >>>
    >>> Would it be better to set up the database with 5
    fields (category, term,
    >>> points, rate, apr), or 13 fields (category,
    30_zeropointRate,
    >>> 30_onepointRate, 30_threepointRate,
    30_zeropointAPR, 30_onepointAPR,
    >>> 30_threepointAPR, 15_zeropointRate,
    15_onepointRate, 15_threepointRate,
    >>> 15_zeropointAPR, 15_onepointAPR,
    15_threepointAPR)?
    >>>
    >>> The latter option would mean that my table would
    only contain two
    >>> records - one for each of the two categories. It
    seems simpler to
    >>> manage in that regard.
    >>>
    >>> Any thoughts, suggestions, recommendations?
    >>
    >> In my opinion, normalizing is not necessary with
    small sites, for example
    >> the uber-normalized database design I did for the
    telcost compare matrix
    >> (
    http://www.artronics.nl/telcostmatrix/matrix.php
    ) proved to be totally
    >> overkill.
    >>
    >> Joris
    >
    >

  • Best practice in Infoprovider & Query design for access by BO Universe

    Hello Experts,
    Are there any best practices, identified by practitioners or suggested by SAP, for developing InfoProviders and queries that will be accessed by a BO Universe?
    Best practices should be from the perspective of performance, design simplicity, adaptability to change, etc.
    Appreciate your help.
    Regards,
    Pritesh.
    Edited by: pritesh prakash on Jul 19, 2010 10:51 AM

    Thanks Suresh.
    My project plan is to build InfoCubes and queries, which will then be used to build a Universe on top. Thus I am looking for dos and don'ts while designing InfoCubes and queries so that there won't be any issues (performance or other) when they are accessed by the Universe built on them.
    Hope I have made it more clear now.
    Regards,
    Pritesh.

  • Access Point Best Practice

    I have 5 access points. What is the best practice for configuring their channels (802.11b/g): all APs on the same channel, or each on a different channel?
    Thanks

    Hi!
    Use non-overlapping channels 1, 6 and 11 on consecutive APs.
    eg.
    AP1-Channel 1
    AP2-Channel 6
    AP3-Channel 11
    AP4-Channel 1
    AP5-Channel 6
    Please take a look:
    http://www.cisco.com/en/US/products/hw/wireless/ps441/products_tech_note09186a00800a86d7.shtml#nonover
    HTH
    -Jai
