Best practices for Oracle VM for SPARC

Dear All,
I want to test Oracle VM for SPARC, but I don't have a new-model server to test it on. What are the best practices for Oracle VM for SPARC?
I have a Dell laptop with the following specs:
- Intel Core i7-2640M (2.8 GHz, 4 MB cache)
- RAM: 8 GB DDR3
- HDD: 750 GB
- 1 GB AMD Radeon
I want to install Oracle VM VirtualBox on my laptop and then install Oracle VM for SPARC inside VirtualBox. Is that possible?
Please kindly advise.
Thanks and regards,
Heng

Heng Horn wrote:
How about a desktop or workstation whose latest-generation CPU supports Oracle VM for SPARC?

Nope. The only place you find SPARC T4 processors is in Sun servers (and some Fujitsu servers, I think).

Similar Messages

  • Best Practice Advice - Using ARD for Inventorying System Resources Info

    Hello All,
    I hope this is the right place to post a question like this. If not, please direct me to the appropriate location for a topic of this nature.
    We are in the process of using ARD reporting for all the Macs in our district (3,500, plus or minus a few). I am looking for advice and would like some best-practice ideas for a project like this. ANY and ALL advice is welcome: scheduling reports, using a task server as opposed to the Admin workstation, etc. I figured I could learn from those with experience rather than trying to reinvent the wheel. Thanks for your time.

    Hey, I am also interested in any tips. We are gearing up to use ARD for all of our Macs, current and future.
    I am having a hard time entering the user/pass for each machine; is there an easier way to do so? We don't have nearly as many Macs running as you do, but it's still a pain to do each one over and over. Any hints? Or am I doing it wrong?
    thanks
    -wilt

  • Vdisks disappear in oracle vm for sparc 3.1

    Hi,
    I implemented Oracle RAC on Oracle VM for SPARC 3.1, and we have two guest domains running Solaris 11.1, each guest domain on a separate SPARC T4-2 server.
    The ASM disks are 32 disks mapped to the two servers and then mapped to the guest domains.
    The problem: when I first added the disks, all 32 appeared on each guest, but after I restart the server and guest domain, sometimes 30, 26, or 31 disks appear, and sometimes 32. On the other hand, all 32 disks always appear in the control domain.
    Here is my control domain configuration:
    root@control-domain # ldm list-services
    VCC
    NAME LDOM PORT-RANGE
    primary-vcc0 primary 5000-5100
    VSW
    NAME LDOM MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
    primary-vsw0 primary 00:14:4f:fb:11:72 net0 0 switch@0 phys-state 1 1 1500 on
    primary-vsw1 primary 00:14:4f:f8:7b:ef net1 1 switch@1 phys-state 1 1 1500 on
    primary-vsw2 primary 00:14:4f:fa:d1:09 net2 2 switch@2 1 1 1500 on
    primary-vsw3 primary 00:14:4f:fa:2a:ed net3 3 switch@3 1 1 1500 on
    VDS
    NAME LDOM VOLUME OPTIONS MPGROUP DEVICE
    primary-vds0 primary vol1 /dev/rdsk/c0t000B080008004265d0s0
    primary_data_files1 primary data1_vol1 /dev/rdsk/c0t000B08000E004265d0s3
    data1_vol2 /dev/rdsk/c0t000B08000F004265d0s3
    data1_vol3 /dev/rdsk/c0t000B08001A004265d0s3
    data1_vol4 /dev/rdsk/c0t000B08001B004265d0s3
    data1_vol5 /dev/rdsk/c0t000B08001C004265d0s3
    primary_data_files2 primary data2_vol1 /dev/rdsk/c0t000B08001D004265d0s3
    data2_vol2 /dev/rdsk/c0t000B08001F004265d0s3
    data2_vol3 /dev/rdsk/c0t000B080014004265d0s3
    data2_vol4 /dev/rdsk/c0t000B080015004265d0s3
    data2_vol5 /dev/rdsk/c0t000B080016004265d0s3
    primary_data_files3 primary data3_vol1 /dev/rdsk/c0t000B080017004265d0s3
    data3_vol2 /dev/rdsk/c0t000B080018004265d0s3
    data3_vol3 /dev/rdsk/c0t000B080020004265d0s3
    data3_vol4 /dev/rdsk/c0t000B080021004265d0s3
    data3_vol5 /dev/rdsk/c0t000B080019004265d0s3
    primary_data_files4 primary data4_vol1 /dev/rdsk/c0t000B080010004265d0s3
    data4_vol2 /dev/rdsk/c0t000B080011004265d0s3
    data4_vol3 /dev/rdsk/c0t000B080012004265d0s3
    data4_vol4 /dev/rdsk/c0t000B080013004265d0s3
    data4_vol5 /dev/rdsk/c0t000B080022004265d0s3
    primary_online_logs primary online_logs_vol1 /dev/rdsk/c0t000B080024004265d0s3
    online_logs_vol2 /dev/rdsk/c0t000B080025004265d0s3
    primary_arch_logs primary archive_logs_vol1 /dev/rdsk/c0t000B080026004265d0s3
    archive_logs_vol2 /dev/rdsk/c0t000B080027004265d0s3
    archive_logs_vol3 /dev/rdsk/c0t000B080028004265d0s3
    archive_logs_vol4 /dev/rdsk/c0t000B080029004265d0s3
    archive_logs_vol5 /dev/rdsk/c0t000B080005004265d0s3
    archive_logs_vol6 /dev/rdsk/c0t000B080006004265d0s3
    primary_voting primary voting_vol1 /dev/rdsk/c0t000B080002004265d0s3
    voting_vol2 /dev/rdsk/c0t000B080003004265d0s3
    voting_vol3 /dev/rdsk/c0t000B080004004265d0s3
    primary_archive primary arch_vol1 /dev/rdsk/c0t000B08002A004265d0s3
    root@control-domain # ldm list-constraints
    DOMAIN
    primary
    UUID
    58925ecf-b822-6d35-d850-c713499ab385
    MAC
    00:10:e0:0e:b9:38
    HOSTID
    0x860eb938
    CONTROL
    failure-policy=ignore
    extended-mapin-space=off
    cpu-arch=native
    rc-add-policy=
    shutdown-group=0
    VCPU
    COUNT: 16
    MEMORY
    SIZE: 8G
    CONSTRAINT
    threading=max-throughput
    VARIABLES
    boot-device=/pci@400/pci@2/pci@0/pci@e/scsi@0/disk@w5000cca03c436301,0:a disk net
    pm_boot_policy=disabled=1;ttfc=0;ttmr=0;
    IO
    DEVICE OPTIONS
    pci_0
    niu_0
    pci_1
    niu_1
    VCC
    NAME PORT-RANGE
    primary-vcc0 5000-5100
    VSW
    NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
    primary-vsw0 net0 0 switch@0 phys-state 1 1 on
    primary-vsw1 net1 1 switch@1 phys-state 1 1 on
    primary-vsw2 net2 2 switch@2 1 1 on
    primary-vsw3 net3 3 switch@3 1 1 on
    VDS
    NAME VOLUME OPTIONS MPGROUP DEVICE
    primary-vds0 vol1 /dev/rdsk/c0t000B080008004265d0s0
    primary_data_files1 data1_vol1 /dev/rdsk/c0t000B08000E004265d0s3
    data1_vol2 /dev/rdsk/c0t000B08000F004265d0s3
    data1_vol3 /dev/rdsk/c0t000B08001A004265d0s3
    data1_vol4 /dev/rdsk/c0t000B08001B004265d0s3
    data1_vol5 /dev/rdsk/c0t000B08001C004265d0s3
    primary_data_files2 data2_vol1 /dev/rdsk/c0t000B08001D004265d0s3
    data2_vol2 /dev/rdsk/c0t000B08001F004265d0s3
    data2_vol3 /dev/rdsk/c0t000B080014004265d0s3
    data2_vol4 /dev/rdsk/c0t000B080015004265d0s3
    data2_vol5 /dev/rdsk/c0t000B080016004265d0s3
    primary_data_files3 data3_vol1 /dev/rdsk/c0t000B080017004265d0s3
    data3_vol2 /dev/rdsk/c0t000B080018004265d0s3
    data3_vol3 /dev/rdsk/c0t000B080020004265d0s3
    data3_vol4 /dev/rdsk/c0t000B080021004265d0s3
    data3_vol5 /dev/rdsk/c0t000B080019004265d0s3
    primary_data_files4 data4_vol1 /dev/rdsk/c0t000B080010004265d0s3
    data4_vol2 /dev/rdsk/c0t000B080011004265d0s3
    data4_vol3 /dev/rdsk/c0t000B080012004265d0s3
    data4_vol4 /dev/rdsk/c0t000B080013004265d0s3
    data4_vol5 /dev/rdsk/c0t000B080022004265d0s3
    primary_online_logs online_logs_vol1 /dev/rdsk/c0t000B080024004265d0s3
    online_logs_vol2 /dev/rdsk/c0t000B080025004265d0s3
    primary_arch_logs archive_logs_vol1 /dev/rdsk/c0t000B080026004265d0s3
    archive_logs_vol2 /dev/rdsk/c0t000B080027004265d0s3
    archive_logs_vol3 /dev/rdsk/c0t000B080028004265d0s3
    archive_logs_vol4 /dev/rdsk/c0t000B080029004265d0s3
    archive_logs_vol5 /dev/rdsk/c0t000B080005004265d0s3
    archive_logs_vol6 /dev/rdsk/c0t000B080006004265d0s3
    primary_voting voting_vol1 /dev/rdsk/c0t000B080002004265d0s3
    voting_vol2 /dev/rdsk/c0t000B080003004265d0s3
    voting_vol3 /dev/rdsk/c0t000B080004004265d0s3
    primary_archive arch_vol1 /dev/rdsk/c0t000B08002A004265d0s3
    DOMAIN
    dom1-node0
    UUID
    f6369ae2-152a-c65e-ecf9-c1840c895dac
    HOSTID
    CONTROL
    failure-policy=ignore
    extended-mapin-space=off
    cpu-arch=native
    rc-add-policy=
    shutdown-group=15
    VCPU
    COUNT: 32
    MEMORY
    SIZE: 48G
    CONSTRAINT
    threading=max-throughput
    VARIABLES
    pm_boot_policy=disabled=1;ttfc=0;ttmr=0;
    NETWORK
    NAME SERVICE ID DEVICE MAC MODE PVID VID MTU LINKPROP
    vnet0 primary-vsw0 0 network@0 1 phys-state
    vnet1 primary-vsw1 1 network@1 1 phys-state
    vnet2 primary-vsw2 2 network@2 1
    vnet3 primary-vsw2 3 network@3 1
    DISK
    NAME VOLUME TOUT ID
    vdisk vol1@primary-vds0 0
    data1_disk1 data1_vol1@primary_data_files1 1
    data1_disk2 data1_vol2@primary_data_files1 2
    data1_disk3 data1_vol3@primary_data_files1 3
    data1_disk4 data1_vol4@primary_data_files1 4
    data1_disk5 data1_vol5@primary_data_files1 5
    data2_disk1 data2_vol1@primary_data_files2 6
    data2_disk2 data2_vol2@primary_data_files2 7
    data2_disk3 data2_vol3@primary_data_files2 8
    data2_disk4 data2_vol4@primary_data_files2 9
    data2_disk5 data2_vol5@primary_data_files2 10
    data3_disk1 data3_vol1@primary_data_files3 11
    data3_disk2 data3_vol2@primary_data_files3 12
    data3_disk3 data3_vol3@primary_data_files3 13
    data3_disk4 data3_vol4@primary_data_files3 14
    data3_disk5 data3_vol5@primary_data_files3 15
    data4_disk1 data4_vol1@primary_data_files4 16
    data4_disk2 data4_vol2@primary_data_files4 17
    data4_disk3 data4_vol3@primary_data_files4 18
    data4_disk4 data4_vol4@primary_data_files4 19
    data4_disk5 data4_vol5@primary_data_files4 20
    online_logs_disk1 online_logs_vol1@primary_online_logs 21
    online_logs_disk2 online_logs_vol2@primary_online_logs 22
    archive_logs_disk1 archive_logs_vol1@primary_arch_logs 23
    archive_logs_disk2 archive_logs_vol2@primary_arch_logs 24
    archive_logs_disk3 archive_logs_vol3@primary_arch_logs 25
    archive_logs_disk4 archive_logs_vol4@primary_arch_logs 26
    archive_logs_disk5 archive_logs_vol5@primary_arch_logs 27
    archive_logs_disk6 archive_logs_vol6@primary_arch_logs 28
    voting_disk1 voting_vol1@primary_voting 29
    voting_disk2 voting_vol2@primary_voting 30
    voting_disk3 voting_vol3@primary_voting 31
    arch_disk1 arch_vol1@primary_archive 32
    VCONS
    NAME SERVICE PORT LOGGING
    After logging in to the guest domain:
    format
    0. c2d0 <SUN-DiskSlice-449GB cyl 65534 alt 2 hd 64 sec 225>
    /virtual-devices@100/channel-devices@200/disk@0
    1. c2d1 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@1
    2. c2d2 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@2
    3. c2d3 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@3
    4. c2d4 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@4
    5. c2d5 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@5
    6. c2d6 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@6
    7. c2d7 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@7
    8. c2d9 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@9
    9. c2d10 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@a
    10. c2d11 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@b
    11. c2d12 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@c
    12. c2d13 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@d
    13. c2d14 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@e
    14. c2d15 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@f
    15. c2d16 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@10
    16. c2d18 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@12
    17. c2d19 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@13
    18. c2d20 <SUN-DiskSlice-100GB cyl 25738 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@14
    19. c2d21 <SUN-DiskSlice-50GB cyl 12998 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@15
    20. c2d22 <SUN-DiskSlice-50GB cyl 12998 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@16
    21. c2d23 <SUN-DiskSlice-50GB cyl 12998 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@17
    22. c2d24 <SUN-DiskSlice-50GB cyl 12998 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@18
    23. c2d25 <SUN-DiskSlice-50GB cyl 12998 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@19
    24. c2d26 <SUN-DiskSlice-50GB cyl 12998 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@1a
    25. c2d27 <SUN-DiskSlice-50GB cyl 12998 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@1b
    26. c2d28 <SUN-DiskSlice-50GB cyl 12998 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@1c
    27. c2d29 <SUN-DiskSlice-10GB cyl 2623 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@1d
    28. c2d30 <SUN-DiskSlice-10GB cyl 2623 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@1e
    29. c2d31 <SUN-DiskSlice-10GB cyl 2623 alt 2 hd 64 sec 128>
    /virtual-devices@100/channel-devices@200/disk@1f
    30. c2d32 <Unknown-Unknown-0001-1.00TB>
    /virtual-devices@100/channel-devices@200/disk@20
    The missing disks (c2d8, c2d17, and so on) change frequently.
    The following message also appears on both nodes:
    vdc: NOTICE: [4] Error initialising ports
    vdc: NOTICE: [12] Error initialising ports
    Kindly help me troubleshoot this problem.
    Thanks in advance.

    We had a similar issue in the past. Per support, we added the following entries to /etc/system on each LDom, and the issue has subsided for us:
    * Forceload module to prevent virtual disks from disappearing
    * during reboot, bug 15813779
    forceload: drv/vdc
    I hope this helps...
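    A minimal sketch of applying that workaround inside each guest domain (the append step and the SYSTEM_FILE override are my own additions for a dry run; check the entry against your support note, and reboot the domain for it to take effect):

```shell
# Append the vdc forceload workaround (bug 15813779) to /etc/system
# in each guest domain; it takes effect on the next reboot.
# SYSTEM_FILE defaults to /etc/system but can be overridden for a dry run.
SYSTEM_FILE="${SYSTEM_FILE:-/etc/system}"
cat >> "$SYSTEM_FILE" <<'EOF'
* Forceload module to prevent virtual disks from disappearing
* during reboot, bug 15813779
forceload: drv/vdc
EOF
# Confirm the entry landed
grep "^forceload: drv/vdc" "$SYSTEM_FILE"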

  • Best practice to define length for varchar field of table in sql server

    What is the best practice for defining the length of a varchar field in a table?
    For a field such as "Remarks By Person", should it be varchar(max) or varchar(4000)?
    Could it affect optimization in the future?
    Experts, please reply...
    Dilip Patil..

    Hi Dilip,
    varchar(n | max) is variable-length, non-Unicode character data. n defines the string length and can be a value from 1 through 8,000. max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size is the actual length of the data entered plus 2 bytes. We use varchar when the sizes of the column data entries vary considerably; if the field data size might exceed 8,000 bytes, we should use varchar(max).
    So the conclusion, as Uri said, is that choosing varchar(max) or varchar(4000) depends on how many characters we are going to store.
    The following document about varchar in SQL Server is for your reference:
    http://technet.microsoft.com/en-us/library/ms176089.aspx
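    As a quick illustration of that storage formula (a sketch, assuming a single-byte character set where one character is one byte):

```shell
# varchar storage = actual data length + 2 bytes of length overhead
s="Remarks by person"
echo "characters stored: ${#s}"
echo "bytes on disk:     $(( ${#s} + 2 ))"
```

    So a 17-character remark occupies 19 bytes either way; the declared length (4000 vs. max) does not change the storage of short values.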
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Best practices to reduce downtime for Database releases(rolling changes)

    Hi,
    What are the best practices for reducing downtime for database releases on 10.2.0.3? Which DB changes can be rolling and which can't?
    Thanks in advance.
    Regards,
    RJiv.

    I would be very dubious about any sort of universal "best practices" here. Realistically, your practices need to be tailored to the application and the environment.
    You can invest a lot of time, energy, and resources into minimizing downtime if that is the only goal. But you'll generally pay for that goal in terms of developer and admin time and effort, environmental complexity, etc. And you generally need to architect your application with rolling upgrades in mind, which necessitates potentially large amounts of redesign to existing applications. It may be perfectly acceptable to go full-bore into minimizing downtime if you are running Amazon.com and any downtime is unacceptable. Most organizations, however, need to balance downtime against other needs.
    For example, you could radically minimize downtime by having a second active database, configuring Streams to replicate changes between the two master databases, and configuring the middle-tier environment so that you can point different middle-tier servers at one database or the other. When you want to upgrade, you point all the middle-tier servers at database A, except one that lives on a special URL. You upgrade database B (making sure to deal with the Streams replication environment properly, depending on requirements) and do the smoke test against the special URL. When you determine that everything works, you configure all the app servers to point at B (with a Streams replication process configured to replicate changes from the old data model to the new), upgrade A, repeat the smoke test, and then return the middle-tier environment to its normal state of balancing between databases.
    This lets you upgrade with 0 downtime. But you've got to license another primary database. And configure Streams. And write the replication code to propagate the changes on B during the time you're smoke testing A. And you need the middle tier infrastructure in place. And you're obviously going to be involving more admins than you would for a simpler deploy where you take things down, reboot, and bring things up. The test plan becomes more complicated as well since you need to practice this sort of thing in lower environments.
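    The switchover sequence described above can be sketched as pseudocode (every helper name here is hypothetical; the real mechanism for repointing middle tiers and driving Streams is site-specific):

```shell
# Pseudocode sketch of the rolling-upgrade dance; not runnable as-is.
point_midtier_at A --except smoke-test-server   # everyone on A
upgrade_database B                              # B is idle; upgrade it
smoke_test B                                    # via the special URL
point_midtier_at B                              # everyone on B
replicate_changes --from A --to B               # Streams catches B up
upgrade_database A
smoke_test A
rebalance_midtier A B                           # back to normal balancing
```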
    Justin

  • Oracle VM for SPARC

    Hi there,
    I am a Solaris enthusiast who recently purchased a second-hand Sun Fire V210 with 2 CPUs and 4 GB of RAM.
    I am now running Solaris 10 (SPARC) on it, and I want to toy around with virtual machines, because I would like to run OS X and, if possible, x86 Linux inside a VM.
    Since I am just a hobbyist and don't have the funds to purchase expensive software, does anybody have suggestions on what software to use and where to get it?
    With kind regards,
    Gertjan

    Gertjan wrote:
    Now I am running Solaris 10 (sparc) on it and I want to toy around with virtual machines on it this is because I would like to run OS X and if possible x86 Linux inside a VM.

    Unfortunately, this is not how SPARC/Solaris virtualization works. You can't run x86 software on the SPARC platform, so neither Mac OS X nor Linux compiled for x86 would work. I'm also not sure whether the UltraSPARC processor in that V210 supports Oracle VM for SPARC (formerly LDoms), but I'm not a Solaris expert. Essentially, if it is supported, you will be able to run Solaris guests on your SPARC, but that's pretty much it.

  • Installing Oracle Database with ASM on Oracle VM for SPARC

    We're installing Solaris 11 and Oracle VM for SPARC so we can install Oracle Database with ASM. When creating the database, there is a requirement that the raw disks have the same owner as the database. Every time we try to change the owner, it still shows that the owner is root.
    Any ideas?

    Hi
    Please let me know where you are allocating the ASM raw disks for the guest domain from.
    I hope you are changing the disk permissions using chown -R.
    Also confirm the permissions using: # ls -lL /dev/rdsk
    Regards
    AB
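    One common trap behind "the owner always shows root": the /dev/rdsk entries are symlinks, and a plain ls -l reports the link itself rather than the device node behind it, so a successful chown can look like it did not take. The sketch below demonstrates the difference with a scratch symlink (the names are stand-ins; on the real system you would chown the exported slice and verify with ls -lL):

```shell
# /dev/rdsk names are symlinks into /devices; "ls -l" describes the
# symlink, while "ls -lL" follows it and describes the real node.
d=$(mktemp -d)
touch "$d/devnode"              # stand-in for the real device node
ln -s "$d/devnode" "$d/rdsk"    # stand-in for the /dev/rdsk entry
ls -l  "$d/rdsk" | cut -c1      # "l": you are looking at the link
ls -lL "$d/rdsk" | cut -c1      # "-": now at the node itself
rm -r "$d"
```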

  • Archiving Best Practices / How To Guide for Oracle 10g - need urgently

    Hi,
    I apologize if this is a silly question, but I need a step-by-step archiving guide for Oracle 10g and cannot find any reference document. I am in a rather remote part of S.E. Asia and can't seem to find DBAs with the requisite experience to do the job properly. I have had one database lock up this week at a big telecoms provider, and another at a major bank is about to go. I can easily add LUNs and restructure mirrors, etc., at the Unix level [I am a Unix engineer], but I know that is not the long-run solution. I am sure the two databases I am concerned about have never been archived properly.
    This is the sort of thing DBAs must do all the time. Can someone point me to the proper documentation so I can do a proper job and archive a few years of data out of these databases? I do not want to do a hack job. At least I can clone the databases and practise on the clones before I touch production.
    -thanks very much
    -gregoire
    [email protected]

    I'm not so sure this is a general database question, as it would be specific to an application and implementation, and as the technology has changed, the database options to support it have too.
    So for example, if you have bought the partitioning option, there may be some sensible procedure for partitioning off older data.
    Things may depend on whether you are talking about an OLTP, a DW, a DSS, or mixed systems.
    DBAs do it all the time because the requirements are different everywhere. Simply deleting a lot of data after copying the old data to another table (as some older systems do) may just wind up giving you performance problems scanning swiss-cheesed data.
    Some places may not archive at all, if they've separated OLTP from reporting. If all the OLTP data is accessed through indexes, all the older stuff just sits there. The reporting DB may only have what needs to be reported on, or be on a standby DB where range scans are sufficient to ignore old data. Then there's Exadata, which has its own strengths.
    Best Practices have to be on similar enough systems, otherwise they are a self-contradiction.
    Get yourself someone who understands your requirements and can evaluate the actual problem. No apology needed, it is not a silly question. But what is silly is assuming what the problem is with no evidence.

  • Best Practices or Project Template for Rep/Version

    I have installed Repository 6i (release 3) and created the users successfully, even though it took a lot of effort to make sure each step was correct.
    However, in setting up the workareas and importing the project files, I have been going back and forth trying to figure out where things go and who has what access.
    Is there a best practice or project template for setting up a basic repository/version control system that provides:
    1. the repository structure,
    2. corresponding file system structure (for different developers, build manager, etc)
    3. access grants, and
    4. work scenarios, etc.
    The Technet demos and white papers are either too high-level (basic) or too focused on individual functions. I can't get a clear picture of the whole thing, since there are so many concepts and elements that don't easily fit together.
    Considering that I am a decent DBA and developer, it has taken me two weeks, and I am still not ready to sign up other developers to use this thing. How do you expect any small development team to ever use it? It's one thing to design it to be scalable and all-possible; it's another to make it easily usable. I have been advised to use MS VSS. The only reason I am still trying Ora-Rep is its promise to directly support Designer and Oracle objects.

    Andy,
    I have worked extensively with the Repository over the last year and a half. I have collected some of my experiences, and the guidelines derived from them, in a number of papers that I will be presenting at ODTUG 2001, next week in San Diego. If you happen to be there (see www.odtug.com), come and see me and we can talk through your specific situation. If not, and you are interested in those papers, drop me an email and I will send them to you (they will probably also become available on OTN right after the ODTUG conference).
    best regards,
    Lucas

  • JSP Best Practices and Oracle Report

    Hello,
    I am writing an application that obtains information from the user via a JSP/HTML form and then submits it to a database. The JSP page is set up following JSP best practices, with the SQL statements, database connectivity information, and most of the Java source code in a Java bean/Java class. I want Oracle Reports to call this bean and generate a JSP page displaying the information the user requested from the database. Would you please offer guidance for setting this up?
    Thank you,
    Michelle

    JSP Best Practices.
    More JSP Best Practices
    But the most important Best Practice has already been given in this thread: use JSP pages for presentation only.

  • Best practices when carry forward for audit adjustments

    Dear experts,
    I would like to know if someone can share best practices for performing carry forward of audit adjustments.
    We are doing a legal consolidation for one customer and are facing an issue.
    The accounting team needs to pass audit adjustments around April-May for last year.
    So from January to April / May, the opening balance must be based on December closing of prior year.
    Then from May / June to December, the opening balance must be based on Audit closing of prior year.
    We originally planned to create two members for December period, XXXX.DEC and XXXX.AUD
    Once the accountants knew their audit closing balance, they would input it in the XXXX.AUD period, and a business rule could compute the difference between the closing of the AUD and DEC periods and store the result in an opening flow.
    The opening flow hierarchy would be as follow:
    F_OPETOT (Opening balance Total)
        F_OPE (Opening balance from December)
        F_OPEAUD (Opening balance from the difference between closing balance of Audit and December periods)
    Now assume that we are in October, but for some reason the accountant runs a carry forward for February; he is going to impact the opening balance, because at this time (October) we have the audit adjustments.
    How do we avoid such a thing? What are the best practices in this case?
    I guess it is something you may have encountered if you have done a consolidation project.
    Any help will be greatly appreciated.
    Thanks
    Antoine Epinette

    Cookman and I have been arguing about this since the Paleozoic era. Here's my logic for capturing everything.
    Less wear and tear on the tape and the deck.
    You've got everything on the system. Can't tell you how many times a client has said "I know that there was a better take." The only way to disabuse them of this notion is to look at every take. If it's not on the system, you've got to spend more time finding the tape, adding more "wear and tear on the tape and the deck." And then there's the moment when you need to replace the audio for one word from another take. You can quickly check all the other takes (particularly if you've done a thorough job logging the material - see below).
    Once it's on the system, you still need to log and learn the material. You can scan through material much faster once it's captured, and jumping around the material is much easier.
    There's no question that logging the material before you capture makes you learn it in a more thorough way, but with enough self-discipline, you can learn the material as thoroughly once it's been captured.

  • Storage Server 2012 best practices? Newbie to larger storage systems.

    I have many years of experience managing and planning smaller Windows server environments; however, my non-profit has recently purchased two StoreEasy 1630 servers, and we would like to set them up using best practices for networking and Windows storage technologies. The main goal is to build an infrastructure so we can provide SMB/CIFS services across our campus network to our 500+ end-user workstations, taking into account redundancy, backup, and room for growth. The following describes our environment and vision. Any thoughts / guidance / white papers / directions would be appreciated.
    Networking
    The server closets all have Cisco 1000T switching equipment. What type of networking is desired/required? Do we need switch-hardware-based LACP, or will the Windows 2012 NIC-teaming options be sufficient across the four 1000T ports on the StoreEasy?
    NAS Enclosures
    There are 2 StoreEasy 1630 Windows Storage servers. One in Brooklyn and the other in Manhattan.
    Hard Disk Configuration
    Each of the StoreEasy servers has 14 3TB drives, for a total raw storage capacity of 42TB. By default, the StoreEasy servers were configured with 2 RAID 6 arrays and 1 hot-standby disk in the first bay. One RAID 6 array is made up of disks 2-8 and presents two logical drives to the storage server: a 99.99GB OS partition and a 13872.32GB NTFS D: drive. The second RAID 6 array resides on disks 9-14 and is partitioned as one 11177.83GB NTFS drive.
    Storage Pooling
    In our deployment we would like to build in room for growth by implementing storage pooling that can later be increased in size when we add additional disk enclosures to the rack. Do we want to create VHDX files on top of the logical NTFS drives? When physical disk enclosures, with disks, are added to the rack and present a logical drive to the OS, would we just create additional VHDX files on the expansion enclosures and add them to the storage pool? If we do use VHDX virtual disks, what size should we make them? Is there a max capacity - 64TB? Please let us know what the best approach to storage pooling will be for our environment.
    Windows Sharing
    We were thinking that we would create a single share granting all users within the AD FullOrganization user group read/write permission. Then, within this share, we were thinking of using NTFS permissions to create subfolders with different permissions for each departmental group and subgroup. Is this the correct approach, or do you suggest a different one?
    DFS
    In order to provide high availability and redundancy, we would like to use DFS replication on shared folders to mirror storage01, located in our Brooklyn server closet, and storage02, located in our Manhattan server closet. Presently there is a 10TB DFS replication limit in Windows 2012. Is this replication limit per share, or for the total of all files under DFS? We have been informed that HP will provide an upgrade to 2012 R2 Storage Server when it becomes available. In the meanwhile, how should we design our storage and replication strategy around these limits?
    Backup Strategy
    I read that Windows Server Backup can only back up disks up to 2TB in size. We were thinking that we would like our two current StoreEasy servers to back up to each other (to an unreplicated portion of the disk space) nightly until we can purchase a third system for backup. What is the best approach for backup? Should we use Windows Server Backup to capture the data volumes, or should we use third-party backup software?

    Hi,
    Sorry for the delay in reply.
    I'll try to reply to each of your questions. However, for the first one, you may want to post to the Network forum for further information, or contact your device provider (HP) to see if there is any recommendation.
    For Storage Pooling:
    From your description, you would like to create VHDX files on the RAID 6 disks for future growth. That is fine and, as you said, it is limited to 64TB. See:
    Hyper-V Virtual Hard Disk Format Overview
    http://technet.microsoft.com/en-us/library/hh831446.aspx
    Another possible solution is using Storage Spaces - a new feature in Windows Server 2012. See:
    Storage Spaces Overview
    http://technet.microsoft.com/en-us/library/hh831739.aspx
    It lets you add hard disks to a storage pool and create virtual disks from the pool. You can add disks to this pool later and create new virtual disks if needed. 
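    If you go the Storage Spaces route, the pool and virtual disks can also be created from PowerShell instead of Server Manager. A minimal sketch, assuming the server has unassigned disks eligible for pooling (pool and disk names are placeholders):

    ```powershell
    # Collect all physical disks that are eligible for pooling.
    $disks = Get-PhysicalDisk -CanPool $true

    # Create the pool on the server's storage subsystem.
    New-StoragePool -FriendlyName "Pool01" `
        -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
        -PhysicalDisks $disks

    # Carve out a mirrored, thin-provisioned virtual disk; thin provisioning
    # lets you grow the pool later with Add-PhysicalDisk without recreating
    # the virtual disk.
    New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "Data01" `
        -ResiliencySettingName Mirror -Size 10TB -ProvisioningType Thin
    ```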
    For Windows Sharing
    Generally you will end up with different shared folders over time. Creating all shares under a single root folder sounds good, but in practice it may not be achievable, so it really depends on your actual environment.
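    As a rough illustration of the share-plus-NTFS approach described in the question (group and path names here are hypothetical), the root can be shared once and each departmental subfolder locked down with its own ACL:

    ```powershell
    # Share the root folder read/write for the organization-wide group.
    New-SmbShare -Name "Org" -Path "D:\Shares\Org" `
        -FullAccess "DOMAIN\FullOrganization"

    # Break inheritance on a departmental subfolder and grant only that
    # department's group modify rights (plus admins full control).
    icacls "D:\Shares\Org\Finance" /inheritance:r `
        /grant "DOMAIN\Finance-RW:(OI)(CI)M" "DOMAIN\Domain Admins:(OI)(CI)F"
    ```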
    For DFS replication limitation
    I assume the 10TB limitation comes from this link:
    http://blogs.technet.com/b/csstwplatform/archive/2009/10/20/what-is-dfs-maximum-size-limit.aspx
    I contacted the DFSR team about the limitation. DFS-R can actually replicate more data than that; there is no exact hard limit. As you can see, that article was written in 2009. 
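    Once the servers are on 2012 R2, the DFSR PowerShell module can set up the two-way mirror described in the question. A sketch with placeholder names (the group, folder, and paths would match your actual shares):

    ```powershell
    # Create a replication group with both StoreEasy servers as members.
    New-DfsReplicationGroup -GroupName "StorageMirror" |
        New-DfsReplicatedFolder -FolderName "Departments" |
        Add-DfsrMember -ComputerName "storage01","storage02"

    # Connect the two members (creates a bidirectional connection pair).
    Add-DfsrConnection -GroupName "StorageMirror" `
        -SourceComputerName "storage01" -DestinationComputerName "storage02"

    # Point each member at its local copy; the primary seeds initial sync.
    Set-DfsrMembership -GroupName "StorageMirror" -FolderName "Departments" `
        -ComputerName "storage01" -ContentPath "D:\Shares\Departments" `
        -PrimaryMember $true
    Set-DfsrMembership -GroupName "StorageMirror" -FolderName "Departments" `
        -ComputerName "storage02" -ContentPath "D:\Shares\Departments"
    ```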
    For Backup
    As you said, there is a backup limitation (2TB per single backup). So if that cannot meet your requirement, you will need to find a third-party solution.
    Backup limitation
    http://technet.microsoft.com/en-us/library/cc772523.aspx
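    If the built-in tool does fit within the limits, the nightly cross-backup described in the question could be driven by wbadmin against a share on the peer server, scheduled via Task Scheduler (target share and volume letter below are placeholders):

    ```powershell
    # One-off backup of the D: data volume to a share on the other StoreEasy
    # server; -quiet suppresses prompts so it can run from a scheduled task.
    wbadmin start backup -backupTarget:\\storage02\Backups -include:D: -quiet
    ```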
    If you have any feedback on our support, please send to [email protected]

  • SAP Best Practices on assigning roles for Auditors

    Dear Gurus,
    We need to set up SAP roles for auditors in our system for SRM, ECC & BI.
    Could you please suggest which roles should be granted to the auditors as a best practice?
    I would really appreciate your help.
    Best Regards,
    Valentino

    Hi Martin,
    Thanks for your interest. I would be very happy to work with folks like you to slowly improve such roles as we find improvement possibilities for them, so we all benefit from the joint knowledge and cool features that go into them. I have been filing away at a set of them for years now - they are not evil but still useful, and I give them to an auditor without being concerned, as long as they can tell me approximately what they have been tasked to look into.
    I then also show them the corresponding user menu of my role for these tasks and then leave them alone for a while... 
    Anyway... SAP told me that if we host the content on SDN for collaboration and documentation of the changes to the files, then version management of the files can be hosted externally for downloading (actually, SAP does not have an option, because their software does not support it...).
    I would rather host them on my own site and add the link in the SDN wiki and a sticky forum post than use a generic download service, at least to start with. Via change management in the wiki, we can easily map this to version management of the files on a monthly update cycle once there are enough changes to the wiki.
    How about "Update Tuesday" as a maintenance cycle --> config updates each second Tuesday of the month... to remove authorizations to access backdoors which are more than "just display"...
    Cheers,
    Julius

  • Best practices or design framework for designing processes in OSB(11g)

    Hi all,
    We have been working with Oracle SOA Suite 10g; now, in the new project, we are going to use SOA Suite 11g. For 10g we designed our services very similarly to the AIA framework. But in 11g, since OSB has been introduced, we are not able to fit the AIA framework exactly, because OSB has a different structure from ESB.
    Can anybody suggest best practices or a design framework for designing processes in OSB or 11g SOA Suite?

    http://download.oracle.com/docs/cd/E12839_01/integration.1111/e10223/04_osb.htm
    http://www.oracle.com/technology/products/integration/service-bus/index.html
    Regards,
    Anuj

  • Best Practice setting up NICs for Hyper V 2008 r2

    I am looking for suggestions on best practice for setting up a Hyper-V 2008 R2 host at a remote location with 5 NICs: one for the management VLAN and the other 4 on the data VLAN. This server will host 2 virtual machines; one is a DC and the other
    is a member local DHCP server. The server is set up now with one NIC on the management VLAN and the other NICs set to get their IPs from the local DHCP server on the host. We have the virtual networks set up in Hyper-V to
    point to each of the NICs using the "external connection" type. The virtual servers (DHCP and AD) have their own IPs set within them. The issue we are seeing: when the site loses external connections for a while, clients cannot get IP
    addresses from the local DHCP server anymore.
    1. NIC on management Vlan -- IP Static -- Physical host
    2. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V  -- virtual server DHCP
    3. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- Virtual server domain controller
    4. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- extra
    5. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- extra
    Thanks in advance

    Looks like you may be overcomplicating things here.  More and more of the recommendations from Microsoft at this point would be to create a Logical Switch and then layer on Logical Networks for your management layers, but here is what I would do for
    your simple remote office.  
    Management NIC:  Looks good. (Teaming would be better, but only if you had 2 different switches to protect against link failures at the switch level; that doesn't seem relevant in this case.)
    NIC for the data network VLAN:  I would use one NIC in your case if you have the ability to trunk multiple VLANs at the switch level to the NIC.  That way you set the VLAN on the VM's NIC that you want it to access, and your
    virtual switch configuration is very simple.  On this virtual switch, however, I would uncheck IPv4 and IPv6.  There is no need to give this NIC an address, as you are just passing traffic through it from the VMs that are marked with VLAN tags.  Again,
    if you have multiple physical switches in the building, teaming could be an option, but it probably adds more complexity than is necessary for a small office. 
    Even if you keep your virtual switches linked to separate NICs, unchecking IPv4 and IPv6 makes sense. 
    Disable all the other NICs.
    Beyond that, check your routing.  Can you ping between all hosts when there is no interruption? Which DHCP server are they normally getting their addresses from?  Where are your name resolution servers (DNS, WINS)?  
    No silver bullet here, but maybe a step in the right direction.
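    On Server 2012 and later hosts, the trunk-one-NIC layout above can be scripted with the Hyper-V PowerShell module (on 2008 R2 the equivalent settings live in Hyper-V Manager); the switch name, adapter name, VM name, and VLAN ID below are placeholders:

    ```powershell
    # One external switch on the data NIC; AllowManagementOS $false means no
    # host vNIC is created, which also covers the "uncheck IPv4/IPv6" advice.
    New-VMSwitch -Name "DataSwitch" -NetAdapterName "Ethernet 2" `
        -AllowManagementOS $false

    # Tag each VM's NIC with the VLAN it should access.
    Set-VMNetworkAdapterVlan -VMName "DHCP01" -Access -VlanId 20
    ```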
    Rob McShinsky (VirtuallyAware.com)
    VirtuallyAware - Experiences in a Virtual World (Microsoft MVP - Virtual Machine)
