DBA vs Developer

Experts,
Please suggest which stream to opt for: DBA or developer. And why?
What key skills should a developer have?
And what key traits must a DBA have?
I really appreciate and welcome every comment on this thread.
Thanks.

There are two words:
1. Passion
2. Need
So which one are you looking for?
Passion for something comes only when you experience it. Whether you are passionate about becoming a DBA or a developer can be identified only once you have tasted both. You may even feel that both of them are nonsense; nothing wrong with that. In that case you can move on and try some other stuff.
On the other hand, need: you need a job, you need money, you need work. In that case you should look at jumping into something that will get you there and fulfil your needs, and a development job is much easier to find than a DBA job, because in any given system there is a huge difference between the number of DBAs and the number of developers it has. That is also the reason why DBA jobs are generally priced higher.
So, moving on to your question of what skills you must possess as a DBA or a developer.
At the initial stages I think it's pretty much the same. You must know the basics of RDBMS; until you are strong in that, there is no use going forward. Then, coming to Oracle, you must know how Oracle works: how it is designed, its architecture, everything related to it must be learned.
Most of the time developers think it's all about writing SQL and PL/SQL. It's not; it's all about how Oracle really works. Once you get to that point, you decide which direction to proceed. When you reach that point you will have a clear picture of both activities and will be able to make a decision.
Thanks,
Karthick.

Similar Messages

  • Roles and responsibilities of Oracle DBA in development team

    What should be the roles and responsibilities of an Oracle DBA in a development team?
    Should an application DBA have Oracle user credentials on the DB box?

    Hi, an application DBA works much like a production DBA, but when resolving issues SLAs would not apply to them; apart from this, there will be pressure from the development team.
    These are the points I remember:
    Creating test DBs for the testing environment
    Schema replication for POCs
    Replicating the DB for interface setups
    User and space management
    Roles and security management
    Space forecasting (useful when estimating storage)
    Handing the application setup to the production DBA with a proper specification
    Maintaining schema changes
    Ensuring the right scripts are provided to the production DBA team
    Deployment of the application
    Performance tuning
    Checking memory/CPU statistics for all environments at regular intervals; any issues need to be escalated to the infrastructure team
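    For the space-management and space-forecasting items above, here is a minimal sketch of the kind of query an application DBA might run at regular intervals. It uses the standard DBA_DATA_FILES and DBA_SEGMENTS views; the exact grouping is an assumption, so adapt it to your environment:

    ```sql
    -- Rough per-tablespace usage snapshot for space forecasting.
    -- Collect this regularly and trend the results over time.
    SELECT df.tablespace_name,
           ROUND(SUM(df.bytes) / 1024 / 1024)          AS allocated_mb,
           ROUND(NVL(seg.used_bytes, 0) / 1024 / 1024) AS used_mb
    FROM   dba_data_files df
           LEFT JOIN (SELECT tablespace_name, SUM(bytes) AS used_bytes
                      FROM   dba_segments
                      GROUP  BY tablespace_name) seg
                  ON seg.tablespace_name = df.tablespace_name
    GROUP  BY df.tablespace_name, seg.used_bytes
    ORDER  BY df.tablespace_name;
    ```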
    HTC
    tippu

  • Can I do OCA in DBA and Developer Track?

    I have cleared 1Z0-007 and 1Z0-031 recently, and that earns me an OCA in the DBA Track. Now I would like to do another OCA in the Developer Track. So, will just 1Z0-147 be sufficient, as I have already cleared 1Z0-007?
    Also, is there any Hands-On requirement for Oracle Advanced PL/SQL Certified Professional?
    Thanks and Regards,
    Kaushik

    Hi,
    I have cleared 1Z0-007 and 1Z0-031 recently and that earns me a OCA in DBA Track.
    Congrats!
    Now, I would like to do another OCA in Developer Track. So, will just 1Z0-147 be sufficient as I have already cleared 1Z0-007?
    Yes.
    Also, is there any Hands-On requirement for Oracle Advanced PL/SQL Certified Professional?
    No.
    Oracle PL/SQL and Oracle Forms Developer
    http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=155
    Regards,
    Hussein

  • Career Development Insight- Consulting

    Hello everyone, first off I would like to introduce myself to the OTN Community. My name is Thien-Phong Dinh and I am currently a junior at the University of Houston's Bauer College of Business, with about 3 semesters left before completing my degree in Management Information Systems. I came to this discussion looking for any insight available from those who have "been there and done that", so please do help and share your story/insight.
    I am relatively young, turning 22 by the end of this month. I would like to get into the IT consulting industry once I finish my degree; the reason for consulting is that I would like to venture outside of Houston and see the world, so to speak. I know off the bat that I enjoy organizing data and what this information can do in terms of making better financial business decisions. This past summer I had the opportunity to intern for a small investment bank, Global Hunter Securities, that specialized in oil field and equipment services. Long story short, I was fascinated by what better-organized data can be used for in terms of informing investment decisions for clients. I enjoyed designing a custom database with custom queries for GHS during my internship, and I would like to know the best route to getting my foot in the door to becoming an IT consultant whose area of expertise is database administration/architecture.
    I am in the process of preparing for an Oracle Certification sometime this December and would like your insight on what you believe is the best way to get into the IT Consulting industry.
    Do I need to get certified by oracle to become an IT Consultant? If so,..
    Which certifications will help me land a career starting off as an IT Consultant for Oracle products?
    How did you get into the IT Consulting profession?

    As a general rule, Oracle consultants are expected to be individuals with a great deal of experience with the Oracle database, not someone just starting out with it. Consultants are often hired to come in for a short time frame to perform some job that in-house resources have been unable to resolve. Since it is impractical, for the most part, for a company to have an Oracle database and not have a DBA to maintain it, this means that the consultant presumably knows more than the in-house DBA. Becoming a consultant therefore normally starts with working as a DBA or developer for years, gaining experience with Oracle.
    On my OCPrep blog, I try to give advice about getting certified in Oracle and starting/building your career working with it.  I don't currently have a post specific to becoming a consultant.  I picked the following out of the ones there as being reasonably appropriate for your question.
    Oracle Certification Prep: I know nothing about Oracle... but I want to be a DBA.
    Oracle Certification Prep: The Value (or Lack Thereof) of Oracle Certifications
    Oracle Certification Prep: What is Certification?

  • Knowledge of Hardware for a DBA

    Hello friends,
    I started my career as a DBA a few months back and I am still learning. Please help me understand whether, as a DBA, a person is also supposed to know about the underlying server hardware details.
    If yes, what level of knowledge is required?
    -Jek

    890397 wrote:
    I have started my career as a DBA few months back and I am still learning. Please help me understand whether as a DBA, a person is also supposed to know about the underlying server hardware details.
    If yes, what level of knowledge is he required to understand.
    That depends on the environment. I have at times signed the delivery form for new h/w, removed the h/w from the boxes, built it (added memory and PCI cards) and then hooked it up to the network for installing the o/s, and only then added the database s/w.
    The more you know about the environment you're in, the better. It makes you more experienced. More capable. More flexible.
    A DBA who insists that h/w, networking and the operating system are not part of the job description is petty, small-minded and short-sighted.
    So how much should you know about the underlying server h/w? Everything that is available, from how to update firmware on that server to how to check IPMI sensors and use the console port for lights-out management.
    The day that you (as DBA or developer) stop learning, is the day that you start becoming useless.

  • Role of DBA in DW life cycle

    Hi friends,
    Would you please tell me the basic jobs to be done by a DBA in a complete data warehouse life cycle?
    Thanks in advance,
    Pragati.

    I would refer to the various books by Ralph Kimball for more information on this, as he covers all the various roles within a data warehouse project. The Oracle Database 2 Day DBA guide, 10g Release 2 (10.2), provides a comprehensive overview of DBA-type tasks, most of which apply to any type of application (OLTP or data warehousing).
    In addition DBAs might also be asked to manage design repositories such as those required by Oracle Warehouse Builder and other ETL tools and ensure these are configured and backed up correctly.
    Much will depend on the size of the data warehouse team, the size of the project, and the required roles and responsibilities. At some customers where I have worked there have been different DBAs covering the development, testing, QA and production environments; at others I have seen DBAs cover just about everything, including writing deployment scripts. So there is no standard approach, in my opinion.
    Hope this helps,
    Keith
    Product Management
    Oracle Warehouse Builder

  • What to learn first for DBA/DEV?

    Hi,
    I'm completely new to Oracle and RDBMS and was wondering what would be the best first steps for becoming an Oracle DBA or Developer?
    I have 11g running on Linux to play with and I've started a Database Design + Implementation module at university, but I want to go more in depth if possible. Some of the books I've been looking at seem to throw a lot of jargon at you!
    Many thanks
    Mike

    837287 wrote:
    Hi,
    I'm completely new to Oracle and RDBMS and was wondering what would be the best first steps for becoming an Oracle DBA or Developer?
    I have 11g running on Linux to play with and I've started a Database Design + Implementation module in University but want to go more in depth if possible, Some of the books I've been looking at seem to throw a lot of jargon at you!
    Many thanks
    Mike
    =================================================
    Learning how to look things up in the documentation is time well spent investing in your career. To that end, you should drop everything else you are doing and do the following:
    Go to tahiti.oracle.com.
    Drill down to your product and version.
    BOOKMARK THAT LOCATION
    Spend a few minutes just getting familiar with what is available here. Take special note of the "Books" and "Search" tabs. Under the "Books" tab you will find the complete documentation library.
    Spend a few minutes getting familiar with what *kind* of documentation is available there by simply browsing the titles under the "Books" tab.
    Open the Reference Manual and spend a few minutes looking through the table of contents to get familiar with what *kind* of information is available there.
    Do the same with the SQL Reference Manual.
    Do the same with the Utilities manual.
    You don't have to read the above in depth. They are *reference* manuals. Just get familiar with *what* is there to *be* referenced. Ninety percent of the questions asked on this forum can be answered in less than 5 minutes by simply searching one of the above manuals.
    Then set yourself a plan to dig deeper.
    - Read a chapter a day from the Concepts Manual.
    - Take a look in your alert log. One of the first things listed at startup is the initialization parms with non-default values. Read up on each one of them (listed in your alert log) in the Reference Manual.
    - Take a look at your listener.ora, tnsnames.ora, and sqlnet.ora files. Go to the Network Administrators manual and read up on everything you see in those files.
    - When you have finished reading the Concepts Manual, do it again.
    Give a man a fish and he eats for a day. Teach a man to fish and he eats for a lifetime.
    =================================
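    The alert-log exercise above can also be done from SQL*Plus: the non-default initialization parameters that the alert log lists at startup are visible in V$PARAMETER (a sketch; it assumes a session with SELECT access to the dynamic performance views):

    ```sql
    -- List initialization parameters changed from their defaults,
    -- the same set the alert log prints at instance startup.
    SELECT name, value
    FROM   v$parameter
    WHERE  isdefault = 'FALSE'
    ORDER  BY name;
    ```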

  • Information about development

    Hey gurus
    I know the role of the DBA in production, but I want to know the role of the DBA in development. I know a development DBA is responsible for creating the database, which includes deciding init.ora parameters, creating tablespaces, distributing files across disks for less I/O, and identifying the most and least used tables and placing them on the appropriate drives. Can anybody help by describing in detail the roles of a DBA in development? Is normalizing a task of the DBA or not?
    Thanks very much in Advance
    Tinku

    Did you try googling for this?
    http://www.google.com/search?hl=en&lr=&q=java+to+access+hardware+ports&btnG=Search
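    As an illustration of the tablespace-creation duty mentioned in the question, a minimal sketch (all names, paths and sizes here are hypothetical; adjust them to your storage layout):

    ```sql
    -- Hypothetical example: a locally managed tablespace for
    -- application data, plus a schema owner assigned to it.
    CREATE TABLESPACE app_data
      DATAFILE '/u02/oradata/orcl/app_data01.dbf' SIZE 500M
      AUTOEXTEND ON NEXT 100M MAXSIZE 4G
      EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

    CREATE USER app_owner IDENTIFIED BY changeme
      DEFAULT TABLESPACE app_data QUOTA UNLIMITED ON app_data;
    ```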

  • Index vs table

    Hi all ,
    oracle 11g.2 ASM with RAC under RHEL 5
    I know Oracle recommended creating a table in tablespace X and creating the indexes on that table in tablespace Y, but why?
    What are the benefits of that?
    thanks

    861100 wrote:
    i know oracle recommended to create table in x tablespace and create index on this table in y tablespace but why ???
    what's the benifts of that ???
    Was never an Oracle recommendation as far as I recall. (A lot was, however, written about it by "experts" and DBAs.)
    There are issues such as transportable tablespaces, wanting different block sizes for index blocks versus data blocks, complex data management and so on, that raise the issue of whether one should consider using different tablespaces for indexes and data.
    But unless there actually are such issues, the easiest is to use a single tablespace. It makes space management significantly easier. It makes DBA administration easier. And it should have no I/O performance impact, as I/O (in terms of RAID, stripe sets, etc.) is dealt with at the ASM level, and not at the logical storage unit level (such as the tablespace level).
    My personal preference (as DBA and developer) is to have a single dedicated tablespace per logical database - so the Marketing application and schema will have a single dedicated tablespace, the HR application and schema its dedicated tablespace, etc.
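    For reference, the practice being discussed is just a matter of naming a tablespace in each DDL statement (a sketch with hypothetical object names):

    ```sql
    -- The questioned practice: table and index in different tablespaces.
    CREATE TABLE orders (order_id NUMBER PRIMARY KEY, amount NUMBER)
      TABLESPACE data_ts;
    CREATE INDEX orders_amt_idx ON orders (amount)
      TABLESPACE index_ts;
    -- The simpler alternative argued for above: name the application's
    -- single dedicated tablespace in both statements instead.
    ```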

  • When a table with a clustered columnstore index is partitioned, performance degrades if data is located in multiple partitions

    Hello,
    Below I provide complete code to reproduce the behavior I am observing. You could run it in tempdb or any other database; that is not important. The test query provided at the top of the script is pretty silly, but I have observed the same performance degradation with about a dozen queries of varying complexity, so this is just the simplest one I am using as an example here. Note that I also included approximate run times in the script comments (obviously based on what I observed on my machine). Here are the steps, with numbers corresponding to the numbers in the script:
    1. Run script from #1 to #7.  This will create the two test tables, populate them with records (40 mln. and 10 mln.) and build regular clustered indexes.
    2. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 5514 ms, 
    elapsed time = 1389 ms.
    3. Run script from #8 to #9. This will replace regular clustered indexes with columnstore clustered indexes.
    4. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 828 ms, 
    elapsed time = 392 ms.
    As you can see the query is clearly faster.  Yay for columnstore indexes!.. But let's continue.
    5. Run script from #10 to #12 (note that this might take some time to execute). This will move about 80% of the data in both tables to a different partition. You should be able to confirm that the data has been moved when running step #11.
    6. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 8172 ms, 
    elapsed time = 3119 ms.
    And now look, the I/O stats look the same as before, but the performance is the slowest of all our tries!
    I am not going to paste here the execution plans or the detailed properties for each of the operators. They show up as expected: columnstore index scan, parallel/partitioned = true, and both the estimated and actual number of rows are less than during the second run (when all of the data resided in the same partition).
    So the question is: why is it slower?
    Thank you for any help!
    Here is the code to reproduce this:
    --==> Test Query - begin --<===
    DBCC DROPCLEANBUFFERS
    DBCC FREEPROCCACHE
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    SELECT COUNT(1)
    FROM Txns AS z WITH(NOLOCK)
    LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
    WHERE z.RecordStatus = 1
    --==> Test Query - end --<===
    --===========================================================
    --1. Clean-up
    IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
    IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
    IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
    IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
    --2. Create partition function
    CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
    --3. Partition scheme
    CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
    --4. Create Main table
    CREATE TABLE dbo.Main(
    SetID int NOT NULL,
    SubSetID int NOT NULL,
    TxnID int NOT NULL,
    ColBatchID int NOT NULL,
    ColMadeId int NOT NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --5. Create Txns table
    CREATE TABLE dbo.Txns(
    TxnID int IDENTITY(1,1) NOT NULL,
    GroupID int NULL,
    SiteID int NULL,
    Period datetime NULL,
    Amount money NULL,
    CreateDate datetime NULL,
    Descr varchar(50) NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
    -- 40 mln. rows - approx. 4 min
    --6.1 Populate Main table
    DECLARE @NumberOfRows INT = 40000000
    INSERT INTO Main (
    SetID,
    SubSetID,
    TxnID,
    ColBatchID,
    ColMadeID,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
    TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
    ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
    ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --6.2 Populate Txns table
    -- 10 mln. rows - approx. 1 min
    SET @NumberOfRows = 10000000
    INSERT INTO Txns (
    GroupID,
    SiteID,
    Period,
    Amount,
    CreateDate,
    Descr,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
    Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
    Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
    CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
    Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --7. Add PK's
    -- 1 min
    ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Replace regular indexes with clustered columnstore indexes
    --===========================================================
    --8. Drop existing indexes
    ALTER TABLE Txns DROP CONSTRAINT PK_Txns
    DROP INDEX Main.CDX_Main
    --9. Create clustered columnstore indexes (on partition scheme!)
    -- 1 min
    CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Move about 80% of the data into a different partition
    --===========================================================
    --10. Update "RecordStatus", so that data is moved to a different partition
    -- 14 min (32002557 row(s) affected)
    UPDATE Main
    SET RecordStatus = 2
    WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
    -- 4.5 min (7999999 row(s) affected)
    UPDATE Txns
    SET RecordStatus = 2
    WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
    --11. Check data distribution
    SELECT
    OBJECT_NAME(SI.object_id) AS PartitionedTable
    , DS.name AS PartitionScheme
    , SI.name AS IdxName
    , SI.index_id
    , SP.partition_number
    , SP.rows
    FROM sys.indexes AS SI WITH (NOLOCK)
    JOIN sys.data_spaces AS DS WITH (NOLOCK)
    ON DS.data_space_id = SI.data_space_id
    JOIN sys.partitions AS SP WITH (NOLOCK)
    ON SP.object_id = SI.object_id
    AND SP.index_id = SI.index_id
    WHERE DS.type = 'PS'
    AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
    ORDER BY 1, 2, 3, 4, 5;
    PartitionedTable PartitionScheme IdxName index_id partition_number rows
    Main PS_Scheme CDX_Main 1 1 7997443
    Main PS_Scheme CDX_Main 1 2 32002557
    Main PS_Scheme CDX_Main 1 3 0
    Main PS_Scheme CDX_Main 1 4 0
    Txns PS_Scheme PK_Txns 1 1 2000001
    Txns PS_Scheme PK_Txns 1 2 7999999
    Txns PS_Scheme PK_Txns 1 3 0
    Txns PS_Scheme PK_Txns 1 4 0
    --12. Update statistics
    EXEC sys.sp_updatestats
    --==> Run test Query --<===

    Hello Michael,
    I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and test run). I got the following details:
    Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 251 ms, elapsed time = 128 ms.
    As an explanation of the behavior: because an UPDATE statement on a clustered columnstore index is executed as a DELETE plus an INSERT operation, you ended up with all the original row groups of the index with almost all of their data deleted, plus almost the same amount of new row groups holding the new data (coming from the update). I suppose scanning the deleted bitmap caused the additional slowness at your end, or something related to that "fragmentation".
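    The rebuild described above can be expressed with the index names from the script earlier in the thread; ALTER INDEX ... REBUILD recompresses the row groups and physically removes the deleted rows:

    ```sql
    -- Rebuild both clustered columnstore indexes so that rows marked
    -- deleted by the UPDATE are removed and row groups are recompressed.
    ALTER INDEX CDX_Main ON dbo.Main REBUILD;
    ALTER INDEX PK_Txns ON dbo.Txns REBUILD;
    ```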
    Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer

  • Oracle 6i forms & Oracle 9i Database server

    Hi everyone,
    I recently got appointed at a company as an Oracle DBA. I don't have any prior experience as an Oracle DBA or developer, and I have some general questions regarding Oracle 6i Forms; maybe you gurus can help me out.
    # We have our financial application on Oracle 6i Forms with Oracle 9i as the database server. I am trying to create an environment on my test server in order to get familiar with Oracle 6i Forms, and I have the required software. When I start the installation for Oracle 6i Forms, it looks for an Oracle home and selects Orahome_9 by default.
    # Then it asks me for
    * Oracle Forms Developer
    * Oracle Forms Server
    * Oracle Reports Developer
    * Oracle Reports Server
    I guess I need Oracle Forms Developer, so I check that option and click OK.
    # Next it asks me what type of installation I'd like to perform:
    * Typical or Custom.
    I don't know how to proceed from here, as I don't have much idea about Oracle Forms. I know I must go through the documentation, and I've already started that process, but a little advice from you all would be a great help too. I'd love to hear if you need any further information about the configuration. Thanks for reading.

    Hi,
    I would suggest the Typical installation.
    Please check out the following links for the documentation:
    http://www.oracle.com/technology/documentation/index.html
    http://www.oracle.com/technology/documentation/6i_forms.html
    http://www.oracle.com/technology/documentation/oracle9i.html
    I hope it will help you.
    sarah
    Edited by: SaRaH on Jul 14, 2010 4:27 AM

  • Step by step procedure for Upgrade to ECC6.0

    Hi,
    I have gained a lot from this forum. Can someone please mail me at
    [email protected]
    a step-by-step procedure for the upgrade?
    Will award full points for helpful documents.
    With regards,
    Mrinal

    SAP defined a roadmap for upgrade.
    1) Project Preparation
    Analyze the actual situation
    Define the objectives
    Create the project plan
    Carry out organizational preparation, for example identifying the project team
    2)Upgrade Blueprint
    The system and components affected
    The mapped business processes
    The requirements regarding business data
    3)Upgrade Realization -- In this phase the solution described in the design phase is implemented in a test environment. This creates a pilot system landscape, in which the processes and all their interfaces can be mapped individually and tested on the functional basis.
    4)Final Preparation for Cutover -- Testing, Training, Minimizing upgrade risks, Detailed upgrade planning
    5)Production Cutover and Support
    The production solution upgrade
    Startup of the solutions in the new release
    Post processing activities
    Solving typical problems during the initial operation phase.
    SAP expects an upgrade to take at least 2 to 3 months, and that again depends on project scope, complexity and various other factors.
    STEPS IN TECHNICAL UPGRADE
    •     Basis Team will do the prepare activities. (UNIX, BASIS, DBA).
    •     Developers need to run transaction SPDD, which provides the details of SAP standard dictionary objects that have been modified by the client. Users need to decide whether to keep the changes or revert to the SAP standard structure; more often the decision is to keep the changes. This is a mandatory activity in an upgrade and avoids data loss in the new system.
    •     After completing the SPDD transaction, we need to run transaction SPAU to get the list of standard SAP programs that have been modified. This activity can be done in phases, even after the upgrade, but generally it is done in the same go so that your testing results are consistent and you have more confidence in the upgrade.
    •     Run SPUMG Transaction for Unicode Conversion in non-Unicode system. SPUM4 in 4.6c.
    •     Then we need to move the Z/Y objects, and do extended program checks, SQL traces, unit testing, integration testing, final testing, regression testing, acceptance testing, etc.
    The main categories of objects that need to be upgraded are:
    •     Includes
    •     Function Groups / Function Modules
    •     Programs / Reports
    •     OSS Notes
    •     SAP Repository Objects
    •     SAP Data Dictionary Objects
    •     Domains, Data Elements
    •     Tables, Structures and Views
    •     Module Pools, Sub Routine pools
    •     BDC Programs
    •     Print Programs
    •     SAP Scripts, Screens
    •     User Exits
    Also refer to the links -
    http://service.sap.com
    http://solutionbrowser.erp.sap.fmpmedia.com/
    http://help.sap.com/saphelp_nw2004s/helpdata/en/60/d6ba7bceda11d1953a0000e82de14a/content.htm
    http://www.id.unizh.ch/dl/sw/sap/upgrade/Master_Guide_Enh_Package_2005_1.pdf
    Hope this helps you.

  • Similar query, different behaviour

    Hi all
    I have a table with 2 indexes on 10g R2 on Windows 2003.
    One is the composite primary key index (STOW.PK_SM350_TRANSACTION_AUDIT)
    on columns (SM300_TRANSACTIONID, SM350_TRANSACTIONAUDITID);
    the other is the single-column index
    STOW.SM350_IDX1 on column (SM300_TRANSACTIONID).
    first query is
    select count(*) from stow.sm350_transaction_audit where sm300_transactionid = '9B96428447C64BB682F2F004777F42B815933';
    second query is
    select * from stow.sm350_transaction_audit where sm300_transactionid = '9B96428447C64BB682F2F004777F42B815933';
    The WHERE clause of the two queries is the same, and both queries return zero rows.
    The problem is: the first query runs with an index range scan, while the second query runs with a full table scan. The index used is the single-column non-unique index (and the column still does not have any NULLs).
    When I hint the second query to force it to use the single-column index, it runs faster, as expected.
    Index and table statistics are up to date (I tried gathering with both a 10% and a 100% sample for the index, but the results were the same).
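    For reference, the hinted version of the second query would look like this (the INDEX hint names the table alias and the index; a sketch based on the objects described above):

    ```sql
    -- Force the optimizer to use the single-column index
    -- instead of the full table scan it chose on its own.
    SELECT /*+ INDEX(a SM350_IDX1) */ *
    FROM   stow.sm350_transaction_audit a
    WHERE  a.sm300_transactionid = '9B96428447C64BB682F2F004777F42B815933';
    ```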
    The 10053 trace output for the first query is below
    BASE STATISTICAL INFORMATION
    Table Stats::
    Table: SM350_TRANSACTION_AUDIT Alias: SM350_TRANSACTION_AUDIT
    #Rows: 24600584 #Blks: 699502 AvgRowLen: 185.00
    Index Stats::
    Index: PK_SM350_TRANSACTION_AUDIT Col#: 1 2
    LVLS: 3 #LB: 180135 #DK: 25777456 LB/K: 1.00 DB/K: 1.00 CLUF: 1800277.00
    Index: SM350_IDX1 Col#: 1
    LVLS: 3 #LB: 157875 #DK: 3779 LB/K: 41.00 DB/K: 194.00 CLUF: 733950.00
    SINGLE TABLE ACCESS PATH
    Column (#1): SM300_TRANSACTIONID(VARCHAR2)
    AvgLen: 34.00 NDV: 7754 Nulls: 0 Density: 1.8727e-004
    Histogram: HtBal #Bkts: 254 UncompBkts: 254 EndPtVals: 207
    Table: SM350_TRANSACTION_AUDIT Alias: SM350_TRANSACTION_AUDIT
    Card: Original: 24600584 Rounded: 4648929 Computed: 4648929.26 Non Adjusted: 4648929.26
    Access Path: TableScan
    Cost: 153817.92 Resp: 153817.92 Degree: 0
    Cost_io: 153018.00 Cost_cpu: 9901578323
    Resp_io: 153018.00 Resp_cpu: 9901578323
    Access Path: index (index (FFS))
    Index: PK_SM350_TRANSACTION_AUDIT
    resc_io: 39406.00 resc_cpu: 5664988114
    ix_sel: 0.0000e+000 ix_sel_with_filters: 1
    Access Path: index (FFS)
    Cost: 39863.66 Resp: 39863.66 Degree: 1
    Cost_io: 39406.00 Cost_cpu: 5664988114
    Resp_io: 39406.00 Resp_cpu: 5664988114
    Access Path: index (index (FFS))
    Index: SM350_IDX1
    resc_io: 34537.00 resc_cpu: 5395455540
    ix_sel: 0.0000e+000 ix_sel_with_filters: 1
    Access Path: index (FFS)
    Cost: 34972.89 Resp: 34972.89 Degree: 1
    Cost_io: 34537.00 Cost_cpu: 5395455540
    Resp_io: 34537.00 Resp_cpu: 5395455540
    Access Path: index (skip-scan)
    SS sel: 0.18898 ANDV (#skips): 4871330
    SS io: 4871330.27 vs. index scan io: 34042.00
    Skip Scan rejected
    Access Path: index (IndexOnly)
    Index: PK_SM350_TRANSACTION_AUDIT
    resc_io: 34045.00 resc_cpu: 1216715625
    ix_sel: 0.18898 ix_sel_with_filters: 0.18898
    Cost: 34143.30 Resp: 34143.30 Degree: 1
    Access Path: index (AllEqRange)
    Index: SM350_IDX1
    resc_io: 29838.00 resc_cpu: 1162075527
    ix_sel: 0.18898 ix_sel_with_filters: 0.18898
    Cost: 29931.88 Resp: 29931.88 Degree: 1
    Best:: AccessPath: IndexRange Index: SM350_IDX1
    Cost: 29931.88 Degree: 1 Resp: 29931.88 Card: 4648929.26 Bytes: 0
    The 10053 trace output for the second query is below
    BASE STATISTICAL INFORMATION
    Table Stats::
    Table: SM350_TRANSACTION_AUDIT Alias: SM350_TRANSACTION_AUDIT
    #Rows: 24600584 #Blks: 699502 AvgRowLen: 185.00
    Index Stats::
    Index: PK_SM350_TRANSACTION_AUDIT Col#: 1 2
    LVLS: 3 #LB: 180135 #DK: 25777456 LB/K: 1.00 DB/K: 1.00 CLUF: 1800277.00
    Index: SM350_IDX1 Col#: 1
    LVLS: 3 #LB: 157875 #DK: 3779 LB/K: 41.00 DB/K: 194.00 CLUF: 733950.00
    SINGLE TABLE ACCESS PATH
    Column (#1): SM300_TRANSACTIONID(VARCHAR2)
    AvgLen: 34.00 NDV: 7754 Nulls: 0 Density: 1.8727e-004
    Histogram: HtBal #Bkts: 254 UncompBkts: 254 EndPtVals: 207
    Table: SM350_TRANSACTION_AUDIT Alias: SM350_TRANSACTION_AUDIT
    Card: Original: 24600584 Rounded: 4648929 Computed: 4648929.26 Non Adjusted: 4648929.26
    Access Path: TableScan
    Cost: 153975.66 Resp: 153975.66 Degree: 0
    Cost_io: 153018.00 Cost_cpu: 11854128503
    Resp_io: 153018.00 Resp_cpu: 11854128503
    Access Path: index (skip-scan)
    SS sel: 0.18898 ANDV (#skips): 4871330
    SS io: 4871330.27 vs. index scan io: 34042.00
    Skip Scan rejected
    Access Path: index (RangeScan)
    Index: PK_SM350_TRANSACTION_AUDIT
    resc_io: 374255.00 resc_cpu: 6416159397
    ix_sel: 0.18898 ix_sel_with_filters: 0.18898
    Cost: 374773.35 Resp: 374773.35 Degree: 1
    Access Path: index (AllEqRange)
    Index: SM350_IDX1
    resc_io: 168538.00 resc_cpu: 4856139355
    ix_sel: 0.18898 ix_sel_with_filters: 0.18898
    Cost: 168930.32 Resp: 168930.32 Degree: 1
    Best:: AccessPath: TableScan
    Cost: 153975.66 Degree: 1 Resp: 153975.66 Card: 4648929.26 Bytes: 0
    Any idea why the cost calculation for the second query comes out this way? Or can someone correct me if my thinking is wrong?
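    A rough way to sanity-check the two traces is the well-known approximation of the CBO's I/O cost model: an index range scan costs about blevel + leaf_blocks × selectivity, and fetching the matching rows from the table adds about clustering_factor × selectivity. The sketch below only plugs the figures from the trace into that approximation; it is not the optimizer's exact arithmetic.

```python
import math

# Statistics taken from the 10053 trace above (index SM350_IDX1).
blevel = 3                  # LVLS
leaf_blocks = 157875        # #LB
clustering_factor = 733950  # CLUF
sel = 0.18898               # ix_sel / ix_sel_with_filters

# COUNT(*) can be answered from the index alone: no table visit is needed.
index_only_cost = blevel + math.ceil(leaf_blocks * sel)

# SELECT * must fetch every matching row from the table, which adds
# roughly clustering_factor * selectivity block visits.
index_plus_table_cost = index_only_cost + math.ceil(clustering_factor * sel)

# TableScan Cost_io reported in both traces.
full_scan_cost = 153018

print(index_only_cost)        # ~29839; the trace reports resc_io 29838
print(index_plus_table_cost)  # ~168541; the trace reports resc_io 168538
print(full_scan_cost < index_plus_table_cost)  # True: the full scan looks cheaper
```

    In other words, with the 18.9% selectivity estimate the index-only plan still beats the full scan for COUNT(*), but once the table visit is added for SELECT * the index plan costs more than the full scan, so the CBO switches plans. The underlying oddity is that 18.9% estimate for a predicate that actually returns zero rows.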

    Thank you for your comments, Steven.
    You are right, I do have the power to know the data (I am a DBA, not a Developer :) ).
    These are the min and max values, and the searched value, for sm300_transactionid:
    min= 00020978E13B45AEA8556D8AF431CD15
    max= FFF617D95A2D4B34AB085FED512EB9E7
    whr= 9B96428447C64BB682F2F004777F42B815933
    Do you think this could cause the problem? The CBO sees that this value is not captured in the column statistics, so it tries a full table scan to find it.
    And if this is the problem, can I make the assumption below?
    My index is on a NOT NULL column, and I am searching that column for a value outside the min/max range. If the CBO chooses a full table scan for a value that is not in the table, can I say that in these cases the CBO considers the table more reliable than the index?
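    On the out-of-range idea: without a histogram, the 10g CBO is usually described as decaying the estimated selectivity linearly as the searched value moves beyond the column's recorded low/high values. The toy sketch below illustrates that decay model only; with a height-balanced histogram (as in the trace above) the actual behaviour differs, and the function, parameter names, and numeric examples here are illustrative assumptions, not Oracle's real code.

```python
# Toy model of the linear "out of range" selectivity decay (no histogram).
# For VARCHAR2 columns Oracle derives a numeric value from the leading bytes
# of the string; here we just work with plain numbers for intuition.
def out_of_range_selectivity(value, low, high, density):
    span = high - low
    if low <= value <= high:
        return density
    dist = (low - value) if value < low else (value - high)
    # Selectivity shrinks linearly with distance, reaching 0 once the
    # value is a full low-to-high span outside the recorded range.
    return max(density * (1 - dist / span), 0.0)

density = 1.8727e-4  # Density reported in the trace for SM300_TRANSACTIONID

print(out_of_range_selectivity(50, 0, 100, density))   # in range -> density
print(out_of_range_selectivity(150, 0, 100, density))  # half a span out -> density / 2
print(out_of_range_selectivity(300, 0, 100, density))  # beyond one span -> 0.0
```

    Under that model an out-of-range value should make the estimate smaller, not larger, which is another hint that the 18.9% estimate in the trace comes from the histogram rather than from range decay.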
    Message was edited by:
    coskan

  • Oracle 10g and 11g diff

    Dear Friends,
    Previously I was using 10g; now I have installed 11g.
    Can you tell me the exact difference between 10g and 11g?
    How can I find this practically?
    Thanks

    Arbar Mehaboob - user553581 wrote:
    how can I find this practically?
    There are a few manuals at http://tahiti.oracle.com.
    At the beginning of (almost) every manual is a "What's New" section that describes the practical differences for specific features from the previous version. There is also the "New Features Guide", which gives some high-level information about newly introduced features but doesn't dig much into the details of changed features.
    So 'practically', you as DBA or Developer need to understand which features you are using and spend some time reading the manuals to understand what has changed.
    Since Oracle has literally thousands of individual features, and since you are responsible for a subset of those in production, it might not be a bad idea to become familiar with them. ;-)

  • Similarities and Differences between HTML DB and Portal

    Hi, I have several years of experience with Oracle technology as a DBA and Developer (Forms and Reports mainly). I'm excited and willing to familiarize myself with new technologies like Portal and HTML DB. I've been reading about both products and it seems they share similar principles in their architecture: both use dynamic page concepts, templates, and a database-centric approach for building HTML apps.
    It seems like HTML DB is a sort of "little brother" of Portal: you don't need an application server or additional license purchases to deploy database-centric web apps using HTML DB.
    So, in short, can anyone portray the basic similarities and differences between these two products?
    Thanks...!

    "319071" and Doug:
    The products have different purposes. Portal provides content aggregation, application integration and personalization. Together with the Oracle Application Server infrastructure you also get identity management capabilities such as a centralized LDAP-based user directory and Single Sign-On. HTML DB is a tool for rapid development of database centric web applications. Nothing more, nothing less. Portal allows you to bring together applications built using a variety of technologies. HTML DB is just one of those technologies that happens to be optimized for web development on an Oracle database.
    Once you have built your application with HTML DB, you may choose to place a link to it on a portal page. You may even choose to make authentication seamless by making the HTML DB application a Single Sign-On partner application (see: http://www.oracle.com/technology/products/database/htmldb/howtos/sso_partner_app.html). Or, you may want to display some data from a report developed in HTML DB inside a portlet (see: http://www.oracle.com/technology/products/database/htmldb/howtos/omniportlet_index.html)
    Granted, Portal does have database centric development capabilities (what used to be called WebDB), but they are not nearly as flexible as HTML DB's. HTML DB lets you control page flow while maintaining session state, has quite sophisticated report building capabilities such as column based sorting and pagination, form building capabilities such as declarative field level data validations, built in field level help, etc.
    Sergio

Maybe you are looking for

  • Screen Classes in ABAP Objects.

    Hi everybody. I'm starting in newest SAP version and I need to find some information about Screen Classes. Could anybody help me? Thanks and best Regards-

  • How to track the corresponding CWIP and Asset after settlement in reports

    Dear gurus, We are trying to develop a single report which would have commitments, actual costs, Auc amounts and Final Asset postings w.r.t a project or WBS element. What is the table where we can find AuC amount and CWIP number for given WBS element

  • Windows 7 x64 bit and iTunes not recognizing iPhone 3GS

    Hello, I'm sorry to bring this up again and I know the community has seen it a million times but I'm definately needing someone to point me in the right direction. I have an Acer 5552 Laptop with Windows 7 x64 bit. It does not have the Intel P55 chip

  • AppleTV not working with Yamaha AV Receiver

    I have a Yamaha RX-V665 AV Receiver.  All of my other HDMI devices (a blu-ray player and connected macbook) work just fine through the receiver going to my Panasonic 65" display.  However the AppleTV 2 (purchased May 2012) does not. Any ideas? I've a

  • BAPI or fm for transaction FMDERIVER

    hi all is there any BAPI or fm which uploads derivation rules for tcode FMDERIVER? Edited by: PREMA PREMA on Oct 19, 2010 2:33 PM