Physical implementation.

Hi friends,
I have Oracle 7.3 and 9i databases on HP-UX 11. I now want to deploy 9iAS (J2EE and Web Cache) on the same server, installing from a remote location over a T1 connection, so that it can talk to both of these databases. Do I need a specific user with specific privileges other than DBA? I need a separate ORACLE_HOME, DISPLAY, TMP and TNS_ADMIN, plus the HTTP and J2EE setup, and I also want to install and integrate Struts. I will appreciate anyone's comments on this subject.
Regards

Similar Messages

  • Model Physical Implementation?

    Hi,
    I am using SQL Developer Data Modeler 3.0.0.665 and I need your thoughts on the following.
    We have a logical model and a physical model in SQL Developer.
    We need to push the same physical model into multiple schemas/instances.
    What is the best way to do that?
    Any ideas?
    Thanks for helping us out.

    Hi,
    the easiest way is to use the name substitution functionality in DDL generation:
    1) Select the "Apply name substitution" check box.
    2) Under "Object types" on the "Name substitution" tab, select "User".
    3) Define the substitution - an example:
    old: DEV
    new: PROD
    Then select the definitions you want to use in the current DDL generation session.
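    For example (a hedged sketch - the table and column names here are hypothetical), a script generated with the DEV -> PROD substitution applied changes only the schema prefix:

      -- Generated without name substitution (schema as modeled):
      CREATE TABLE DEV.CUSTOMERS (
        customer_id NUMBER(10) NOT NULL,
        name        VARCHAR2(100)
      );

      -- The same definition generated with old: DEV, new: PROD applied:
      CREATE TABLE PROD.CUSTOMERS (
        customer_id NUMBER(10) NOT NULL,
        name        VARCHAR2(100)
      );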
    Philip

  • Logical Database design and physical database implementation

    Hi
    I am an Oracle DBA and we have started a proactive server dashboard portal, which reports on all aspects of our infrastructure across Dev, QA and Prod (performance, capacity, number of servers, number of CPUs, decommission date, OS level, database patch level, etc.).
    This has to be done entirely by our DBA team, as it is not an externally funded project. Now I have been asked to do "logical database design and physical database implementation".
    Even though I know roughly what that means (designing the whole set of tables in a star schema format), I have never done this before.
    I have a rough set of tables in mind, but I think there is a lot of engineering involved in this area to make sure we do it properly.
    I am wondering whether you have any recommendations on where to start. Are there any documents online, or any books on this topic? Are there any documents that explain this with examples?
    Also, what exactly is the difference between logical database design and physical database implementation?
    Thanks and Regards

    Logical database design is the process of taking a business or conceptual data model (often described in the form of an Entity-Relationship Diagram) and transforming that into a logical representation of that model using the specific semantics of the database management system. In the case of an RDBMS such as Oracle, this representation would be in the form of definitions of relational tables, primary, unique and foreign key constraints and the appropriate column data types supported by the RDBMS.
    Physical database implementation is the process of taking the logical database design and translating that into the actual DDL statements supported by the target RDBMS that will create the database objects in a target RDBMS database. This will generally include specific physical implementation details such as the specification of tablespaces, use of specialised indexing (bitmap, clustered etc), partitioning, compression and anything else that relates to how data will actually be physically stored inside the database.
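    As a hedged sketch of the distinction (the table, column, tablespace and partition names here are hypothetical), the logical design stops at relational structure, while the physical implementation adds the storage decisions:

      -- Logical design: tables, keys and data types only
      CREATE TABLE orders (
        order_id   NUMBER(10)  NOT NULL,
        order_date DATE        NOT NULL,
        status     VARCHAR2(20),
        CONSTRAINT orders_pk PRIMARY KEY (order_id)
      );

      -- Physical implementation: the same table with storage details added
      CREATE TABLE orders (
        order_id   NUMBER(10)  NOT NULL,
        order_date DATE        NOT NULL,
        status     VARCHAR2(20),
        CONSTRAINT orders_pk PRIMARY KEY (order_id)
      )
      TABLESPACE orders_data
      COMPRESS
      PARTITION BY RANGE (order_date) (
        PARTITION p2011 VALUES LESS THAN (DATE '2012-01-01'),
        PARTITION pmax  VALUES LESS THAN (MAXVALUE)
      );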
    It sounds like you already have a physical implementation? If so, you can reverse engineer this implementation into a design tool such as SQL Developer Data Modeller. This will create a logical design by examining the contents of the Oracle data dictionary. Even if you don't have an existing database, Data Modeller is a good tool to use as a starting point for logical and even conceptual/business models.
    If you want to read anything about logical design, "An Introduction to Database Systems" by Date is always a good starting point. "Database Systems - A Practical Approach to Design, Implementation and Management" by Connolly & Begg is also an excellent reference.

  • Compare physical db with CASE model

    I'm using Designer Release 6.0 for PC. I'd like to compare (diff) my Designer db server model with the physical implementation residing on our server. We sometimes have developers change the physical db without changing the CASE tool (ERD, tables, etc.). I remember an old version of the CASE tool that had a report available that performed this compare. You run the report, log in to your physical db, and it would then compare the CASE model with the physical model and give you an output of the differences. It would report any column size/type changes, constraint changes, index changes - everything. I don't remember the name of the report, but I can't find a report title in this CASE version that sounds like this. Anyone know of such a report?
    Thanks for any help,
    Kevin Sharpe

    Hi,
    You can run the Reconcile Report when you run the Server Model Generator. In the Design Editor, select the Relational Table Definition node and open the Server Model Generator. You must choose to generate to the live database rather than generate the DDL files.
    The generator does not automatically execute the DDL it creates. Once the generator has created the DDL, it also creates the Reconcile Report and brings up a dialog allowing you to view the DDL, execute the DDL, examine the report, or cancel. The report compares the physical database with the table definitions.
    If you examine the DDL you'll see that the generator creates ALTER statements for any tables it finds in the database.
    Hope this helps
    Rgds
    Susan
    Oracle Designer Product Management

  • Attributes For SQL Server Physical Models Aren't Saved

    In version 3.0.0.665, a physical implementation of a data model was persisted to SQL Server 2005. The physical implementation required the use of IDENTITY columns, the equivalent of an Oracle sequence.
    The functionality appeared to work fine: the DDL generated correctly and the script was persisted to the target database without incident. Both the physical model and the DMD were saved. After deployment, the client required the addition of several attributes to the model. Since the model had not been populated yet, the desired approach involved dropping the SQL Server database (Oracle schema), applying the changes to the data model, generating the DDL, and recreating the SQL Server database.
    Upon inspection of the generated DDL, the IDENTITY directives were absent. Further research discovered that the attributes were absent from the property sheets attached to the various physical tables. Further testing uncovered that these attributes only persist for the lifetime of the data modeling session.
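    For reference, the IDENTITY directive in question would look like this in the generated DDL (a hedged sketch - the table and column names are hypothetical):

      -- SQL Server: IDENTITY fills the role an Oracle sequence plays
      CREATE TABLE customers (
        customer_id INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        name        VARCHAR(100)      NOT NULL
      );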
    I have not had this experience with Oracle physical models (e.g. creating triggers, sequences, etc.). Most likely this is a defect in the software, but I was curious to know if anyone else has run across this issue.
    Warm regards,
    Greg

  • Physical schema vs logical schema in ODI

    Hi, I am new to ODI. I have successfully loaded data and metadata to Essbase and Planning, but I am still not clear why ODI uses a physical schema when we only use the logical schema while reversing, executing interfaces, etc. in Designer.

    Hi,
    A logical schema will always point to a physical schema.
    The aim of the logical schema is to ensure the portability of procedures and models across the different physical schemas. In this way, all developments in Designer are carried out exclusively on logical schemas.
    A logical schema can have one or more physical implementations on separate physical schemas, but they must be based on data servers of the same technology. A logical schema is always directly linked to a technology.
    To be usable, a logical schema must be declared in a context. Declaring a logical schema in a context consists of indicating which physical schema corresponds to the alias - the logical schema - for that context.
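    As a hedged illustration of that context resolution (all names here are hypothetical), one logical schema resolves to a different physical schema per context:

      Logical schema    Context    Physical schema
      ORACLE_SALES      DEV        dev_server.SALES_DEV
      ORACLE_SALES      PROD       prod_server.SALES_PROD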
    Thanks,
    Sutirtha

  • Designer 9i Printing Physical Schema

    Hi,
    How can I force Designer to print a diagram on one page?
    Thank you

  • Physical Server Clustered Migration to Virtual Machine

    Dear All,
    We got a request to migrate SQL Server from a physical server to a virtual machine.
    Please advise me on the procedure and checklist for this migration request.
    Thanks.

    Hi mailatdinesh,
    If you want to run SQL Server in a virtual machine (VM), here are some essential tips for virtualizing SQL Server.
    1. To support bare-metal virtualization platforms, you need to make sure that the host uses a 64-bit x64 processor. In addition, for a resource-intensive workload like SQL Server, it is vital that the host processor supports Second Level Address Translation.
    2. SQL Server can automatically take advantage of multiple CPUs if they are present. If you are migrating from a physical SQL Server implementation to a virtualized one, you can use the number of CPUs in the physical implementation as a guide for the number of virtual CPUs to configure in the SQL Server VM.
    3. For SQL Server to take advantage of dynamic memory, I recommend using the Enterprise edition of SQL Server 2008 or later, or the Datacenter edition of SQL Server 2008 R2 or SQL Server 2008.
    For more information, see:
    http://sqlmag.com/sql-server/sql-server-virtualization-tips
    When you migrate an instance from one server to another, as mentioned in the other post, you can use backup and restore, the sp_detach_db and sp_attach_db stored procedures, or the Import and Export Data Wizard to copy objects and data between SQL Server databases. You also need to transfer logins and passwords between instances of SQL Server, and move jobs, alerts and operators. For more information, you can refer to the following article:
    http://support.microsoft.com/kb/314546
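    As a minimal sketch of the backup-and-restore route (the database name, logical file names and paths below are hypothetical placeholders):

      -- On the physical server: take a full backup
      BACKUP DATABASE SalesDB
        TO DISK = N'D:\backup\SalesDB.bak';

      -- On the VM: restore the backup, relocating files to the new disks
      RESTORE DATABASE SalesDB
        FROM DISK = N'D:\backup\SalesDB.bak'
        WITH MOVE 'SalesDB'     TO N'E:\data\SalesDB.mdf',
             MOVE 'SalesDB_log' TO N'F:\log\SalesDB_log.ldf';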
    There is also a similar issue about moving SQL Server to a VM; you can refer to the following post.
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/43085f0f-4526-4994-be95-530aecb5dd7c/moving-sql-server-to-vm
    Regards,
    Sofiya Li
    TechNet Community Support

  • Coherent vs. Non-Coherent Implementation

    This is a follow up to this post.
    Two questions here.
    First...Consider this situation.  With the generator off, I turn on my analyzer.  It begins acquiring IQ samples, though they have no meaning since I'm not sending any data.  I then go and turn on the generator, which starts spitting out multiple IQ samples for every symbol.  Question:  How does the receiver figure out when I started to send "real" data?  It's a question similar to this post.  An NI rep said that file transfer had been accomplished when the Tx/Rx were in the same chassis, but a wireless (i.e. two independent PXI chassis) implementation had not been done (by NI or the poster).  Is there something being passed in hardware?  Or is there some VI that the receiver uses to determine when "valid" IQ samples start to flow.  A similar question could be asked when you turn the generator on first, and the analyzer on sometime in the middle of a data stream.  The short question: How does the receiver know in a stream of IQ samples where one symbol begins and another ends?
    Second... A related question.  What's going on in generation (and/or in demod.) to determine a coherent scheme and a non-coherent scheme?  Is there a PLL in the receiver chain?  Are the internal clocks synched (or not synched in the case of non-coherent) via the backplane?  (Note:  I understand the differences between coherent and non-coherent, rather my question is to how this is physically implemented in the NI hardware/software toolkit.)
    Thanks!
    Brandon

    Hello Brandon,
    This is a good question. Basically this is a similar situation to an oscilloscope question, where you can be acquiring time-domain noise data with a scope when a signal is suddenly applied to the input. How does the scope know that the signal being input is now 'meaningful'?
    The answer is via triggering or some form of synchronization. If the IQ stream into the analyzer is 'off' and a signal is then applied, one can trigger off this rising edge of 'magnitude' data. The IQ data can be interpreted as magnitude/phase data in polar coordinates. When the signal is off, these IQ samples will have magnitude data close to 0. When the signal is applied, the magnitude data will have a rising edge. This rising edge can be configured as the trigger point for the PXI-5661 in the NI-RFSA driver using the IQ Power Edge trigger type. This is useful for capturing bursted RF signals, or RF signals which may be off and become 'on' at an unknown point in time in the future.
    If there is no change in signal magnitude which can be used as a reference, the other option is to create some synchronization between the generator and receiver, where the generator sends out a digital pulse which can be used as a digital trigger for the analyzer.
    As for determining which of the acquired IQ samples are symbols, this relates to the concept of oversampling the IQ data to allow pulse shaping. If bits were mapped to IQ constellation points and no oversampling/interpolation were performed to increase the sampling rate of this IQ data, it could not be pulse shaped effectively. Since you end up with an IQ data rate that is some multiple of the symbol rate, the question arises as to how to know which IQ samples are the symbols that need to be mapped back to bits.
    This process is done in the NI RF platform in the Modulation Toolkit demodulation algorithms. The algorithms perform a process known as symbol clock recovery. After the IQ data is resampled to an integer number of samples per symbol, each sample is checked for its 'closeness' to the ideal symbol locations specified in the symbol map. If one sample is very close to an ideal symbol location, we are using 8 samples per symbol, and the sample 8 samples after that one is also close to an ideal symbol location, then there is a high probability that these are the samples that occur at the symbol clock period and should be mapped to bits.
    In any case, this is an issue for the demodulation software and should not impact you, as it is already being done.
    As for phase coherency in the NI RF hardware, the PXI-5600 downconverter and PXI-5610 upconverter modules each generate their own LO signals. As they do not share common LO signals, they cannot be phase synchronized and are not phase coherent. The PXI-5600 and PXI-5610 (and thus the PXI-5660/5661 and PXI-5670/5671) can be frequency locked via a 10 MHz reference, but they will not have 0 degrees of phase offset.
    Regards,
    Andy Hinde
    National Instruments

  • Java-pointers

    What is the difference between null and void pointers?

    In terms of the physical implementation I cannot tell you the difference, but programmatically: void means there is no return value, and null means no object is assigned to the variable.

  • Problem with sessions in Kate Editor

    Hey guys!
    I'm using the Kate editor to code and I'm having problems with sessions. If Kate is open and I log out of KDE, when I come back to KDE all my customizations in Kate's session (activated plugins, font size, etc.) are lost.
    If I manually close Kate before logging out of KDE, all the customizations are kept when I manually start Kate. I have tried a lot of workarounds, but none worked.
    Is this a bug? Anyone else with this issue?
    Thanks in advance!

    The Warning errors are simply because you don't have the tablespaces, users, and roles defined in your application system under the DB Admin tab. Unless it is important to you to capture the physical implementation of your tables exactly, as well as the table definitions, you can safely ignore these. If the physical implementation IS important to you, then you need to create these tablespaces, roles and users under the database you created under the DB Admin tab before you start the capture.
    The Error is because the set of objects you are capturing includes a foreign key that references the table named "PLEASANT". This table must be among the objects you are capturing, or must already exist as a Table Definition in your application system in the repository.

  • How to make BI Info obj Data element in-herit source DE documntn?

    Say I am in ECC. I go to SE11, give the table name MARA and then double-click on the data element 'MATNR'. Then I click on Documentation. I get a popup with the short text "Material Number" and the definition "Alphanumeric key uniquely identifying the material".
    I am interested in the latter information, the 'Definition' - and in general whatever documentation comes up for a data element in ECC, CRM, SCM or SRM.
    Now I log into SAP BI. I find that under the characteristic 0Material, the data element is /BI0/oimaterial. When I double-click this data element and press the Documentation button, the system says 'No documentation'.
    My questions:
    1. Is there a switch we could turn on in the source ECC/SCM/CRM/SRM system so that the data element in SAP BI inherits the original source field's data element documentation?
    {I am not too convinced by the argument that in BI we have InfoObjects while in ECC we have fields, since I am talking about data element level information - I would tend to think of this as an oversight by the designers, or de-prioritization!}
    2. Could we have an ABAP workaround? That is, in BI we identify the tables that house the mapping between the source and destination data elements and take out this information. Then we extract the data element documentation with function DOCU_GET (from ECC) and use the mapping information above to link the SAP BI data element with the source data element documentation.
    WHY do I want to punish myself like this? Our use case is that we take SAP BI tables, fields, metadata etc. and create a model in a modeling tool, plus a physical implementation in a 3rd-party DW database, as our own canonical, application-agnostic corporate data warehouse. I would want the source data element documentation to flow to this last system as well as to the modeling tool.
    Regards
    Sasanka

    That is, in BI we identify the tables that house the mapping between the source and destination data elements and take out this information. Then we extract the data element documentation by function DOCU_GET (from ECC) and use the mapping info above to link the SAP BI data element with the source data element documentation.
    1) SAP don't supply this, I would imagine, because R/3 isn't the only possible source of data. I'm currently working on an app that extracts from R/3, an Oracle database and APO. From whence should I take the documentation for my MATERIAL InfoObject? While being able to transfer the documentation of data elements might be very useful for your app, I can't see that it would generally be of interest to clients - I've certainly never heard of such a requirement. So, in my opinion at least, it isn't a design flaw.
    2) As you've pointed out, you can get the tables that do the mapping, so you know the source data elements, so you can get the documentation. I'm not sure how to store the documentation, but the obvious candidate for a link between InfoObject and data element would be the master data of an own InfoObject. You could wrap DOCU_GET in an RFC (if it isn't already RFC-enabled) and do a direct call between your 3rd-party app and R/3 to get the documentation. For information about the mapping tables, I'd suggest asking that question specifically in one of the BI forums.
    matt

  • How to delete the data from partition table

    Hi all,
    I am very new to partitioning concepts in Oracle.
    My question is how to delete the data from a partitioned table.
    Will the query below work?
    delete from table1 partition (P_2008_1212)
    We have defined range partitioning.
    Otherwise, please help me understand how to delete the data from a partitioned table.
    Thanks
    Sree

    874823 wrote:
    delete from table1 partition (P_2008_1212)
    This approach is wrong - as Andre pointed out, this is not how partitioned tables should be used.
    Oracle supports different structures for data and indexes. A table can be a hash table or an index-organised table. It can have B+tree indexes. It can have bitmap indexes. It can be partitioned. Etc.
    How the table implements its structure is a physical design consideration.
    Application code should only deal with the logical data structure. How that data structure is physically implemented has no bearing on the application. Does your application need to know what the indexes are, and the names of the indexes, in order to use a table? Obviously not. So why then does your application need to know that the table is partitioned?
    When your application code starts referring directly to physical partitions, it needs to know HOW the table is partitioned. It needs to know WHAT partitions to use. It needs to know the names of the partitions. Etc.
    And why? All this means is increased complexity in application code, as this code now needs to know and understand the physical data structure. This app code is now more complex, has more moving parts, will have more bugs, and will be more complex to maintain.
    Oracle can take an app's SQL and determine (based on the predicates of the SQL) which partitions to use and not use when executing that SQL. All done totally transparently. The app does not need to know that the table is even partitioned.
    This is a crucial concept to understand and get right.
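    A hedged sketch of the predicate-driven alternative (assuming the range partition key is a date column, here hypothetically named trade_date): delete by predicate and let Oracle prune to the matching partition itself:

      -- Instead of: delete from table1 partition (P_2008_1212)
      DELETE FROM table1
      WHERE  trade_date >= DATE '2008-12-12'
      AND    trade_date <  DATE '2008-12-13';

      -- If the goal is to remove an entire partition's data, the DBA-level
      -- alternative is DDL rather than application code:
      ALTER TABLE table1 DROP PARTITION p_2008_1212;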

  • How to delete the articles from MVKE table.

    Hi all,
    I want to remove articles from the MVKE table, but without the articles being removed from other tables.
    How do I set the deletion flag at sales organisation level?
    We want to set the deletion flag at sales organisation level, but without setting the deletion flag at site level.
    Thanks in advance

  • How can we change the column sequence from 1st to 7th position

    Hi
    I have a table in which I have a column named DNAME, and it is in the first position. I want to change its position to 7, i.e. the last one. The table is huge and filled with data.
    Is there any way I can change the position of the column without recreating it in another table and then renaming it?
    Thanks

    Hello,
    The sequence of a column in a table is meaningless and you should treat it as such. The column ordering you see when you run a "select * from ..." query is based on the physical implementation of the table, which is completely separate from the logical interpretation, and should not be relied on in a programmatic context. It should essentially be viewed as "random" and prone to change with the physical storage properties of the table (although this is not always the case, there is no assurance of this).
    One way to do this (which happens to be the easiest and least resource-intensive in this case) is to create a view (derived relation) of the base table, with the columns ordered in the specific sequence you desire. But really, the column ordering here is still meaningless, and the only time your application should rely on the order is when you specifically list the columns you require in a query -- but NEVER a "select *".
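    A minimal sketch of the view approach (the table name and the column names other than DNAME below are hypothetical):

      -- The base table stays untouched; the view only fixes a presentation order
      CREATE OR REPLACE VIEW dept_info_v AS
      SELECT col2, col3, col4, col5, col6, col7, dname
      FROM   dept_info;

      -- Applications that care about order still list the columns explicitly:
      SELECT col2, col3, col4, col5, col6, col7, dname
      FROM   dept_info_v;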
    cheers,
    Anthony

Maybe you are looking for

  • Linkage Error while setting up webApp

    Hi, In my webApp I bumped into the following Exception while running JSPs which query the database. Platform: Windows NT 4, WebLogic 5.1 with SP 8. Action: WLS Server crashes with Dr. Watson application er

  • DNS problem when calling a web service

    Hi, when I try to call a web service from the Studio I get the following error message: return code: 503 Network Error Network Error (dns_server_failure) Your request could not be processed because an error occurred contacting the DNS server. The DNS

  • Synchronizing Material Master revision level across multiple plants

    Hello, If an organization uses the material master revision level to maintain the details of a material's validity across different plants, then we see a material master with different revisions in that organization. Now I would like to understand the impa

  • HT4059 How can I see the price of an ibook I recently downloaded?

    I recently downloaded an ibook and I forgot how much I paid for it. It just tells me that the book is downloaded and I see no indication of the price in the description.

  • Tools and spares used for machine maintenance -table and fields -reg

    Hi, for a machine, while attending breakdowns we use some spares and tools. Which table of SAP stores this data, like for the equipment - this notification/order ........ which spares/tools are used ...........cost center to which these are issued re