Datafile sizing best practice?

Hi,
I have a Tablespace of size 82 GB with 3 datafiles. The datafile config is given below.
'prod01.dbf' SIZE 33554112K AUTOEXTEND ON NEXT 8K MAXSIZE UNLIMITED
'prod02.dbf' SIZE 33554416K AUTOEXTEND ON NEXT 20M MAXSIZE UNLIMITED
'prod03.dbf' SIZE 18064M AUTOEXTEND ON NEXT 20M MAXSIZE UNLIMITED
My server config is given below.
HP AMD 4xCPU 6 cores blade server
Sun Solaris 10 - 64 bit
10g R2 10.2.0.4 64 bit
Native local HDD 2x300GB having RAID 1
What is the best practice for sizing the above datafiles to attain maximum performance?
Regards,
Ashok Kumar.G

> I have a Tablespace of size 82 GB with 3 datafiles. [...]
You can have different extents on each datafile, but it depends on RAID level performance too.
> What is the best practice for sizing the above datafiles to attain maximum performance?
Check this note:
*I/O Tuning with Different RAID Configurations [ID 30286.1]*
http://www.dba-oracle.com/t_datafile_management.htm
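A minimal sketch of the usual fix, using the file names from the thread (the 100M increment is an illustrative assumption, not a tested recommendation): pre-size the files near their expected high-water mark, and replace tiny autoextend increments - the 8K NEXT on prod01.dbf forces a file-extension operation on nearly every space request - with one larger, uniform value.
-- Note: on an 8K-block smallfile tablespace, MAXSIZE UNLIMITED still caps
-- each file at about 32GB (4194303 blocks), which prod01/prod02 are already near.
ALTER DATABASE DATAFILE 'prod01.dbf' AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;
ALTER DATABASE DATAFILE 'prod02.dbf' AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;
ALTER DATABASE DATAFILE 'prod03.dbf' AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;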

Similar Messages

  • Font sizing best practice in 2014

    Decided to obtain a better working knowledge of font sizing best practice, read until my eyes bled, and find I continue to have questions. So much of what is posted is old (more than a few years) and change happens fast.
    To the point: since use of em units (or percentages) is often preferred, I now see that rems (root ems) are another option, which leads me to ask:
    Should I consider using “rem” sizing? Or do the zooming capabilities of modern browsers mean I do not have to obsess over how precisely I guesstimate how viewers actually see my written content, since they have better control at their end? Is em compounding of sizes something that must be considered?
    Thanks-

    Use a mixture of what is best suited.
    For document level I use pixels as in
    /* Document level adjustments */
    html {
      font-size: 13px;
    }
    @media (min-width: 760px) {
      html { font-size: 15px; }
    }
    @media (min-width: 900px) {
      html { font-size: 17px; }
    }
    Then for the modules I use root level ems as in
    /* Modules will scale with document */
    header {
      font-size: 1.5rem;
    }
    footer {
      font-size: 0.75rem;
    }
    aside {
      font-size: 0.85rem;
    }
    Then the type sizes that will scale with the modules
    /* Type will scale with modules */
    h1 {
      font-size: 3em;
    }
    h2 {
      font-size: 2.5em;
    }
    h3 {
      font-size: 2em;
    }
    Using this method I keep each scenario under complete control.
    A List Apart has a nice article on the subject.

  • ASM on SAN datafile size best practice for performance?

    Is there a 'Best Practice' for datafile size for performance?
    In our current production, we have 25GB datafiles for all of our tablespaces in ASM on 10GR1, but was wondering what the difference would be if I used say 50GB datafiles? Is 25GB a kind of mid point so the data can be striped across multiple datafiles for better performance?

    We will be using Redhat Linux AS 4 update u on 64-bit AMD Opterons. The complete database will be on ASM... not the binaries though. All of the datafiles we currently have in our production system are 25GB files. We will be using RMAN-->Veritas Tape backup and RMAN-->disk backup. I just didn't know if anybody out there was using smallfile tablespaces with 50GB datafiles or not. I can see that one of our tablespaces will probably be close to 4TB.
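    Since ASM stripes file extents across every disk in the diskgroup anyway, the 25GB-vs-50GB choice is mostly administrative (file counts, restore granularity) rather than a striping decision. For a tablespace heading toward 4TB, one option to weigh - sketched here with an illustrative diskgroup name and sizes, not a tested recommendation - is a bigfile tablespace, which avoids juggling 160+ smallfile datafiles:
    -- Hypothetical sketch: one bigfile tablespace instead of many 25GB files.
    CREATE BIGFILE TABLESPACE big_data
      DATAFILE '+DATA' SIZE 500G
      AUTOEXTEND ON NEXT 10G MAXSIZE 4T;
    The trade-off: a single huge file simplifies space management but makes the datafile itself the unit of backup and restore, so weigh it against your RMAN strategy.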

  • Best Practices To Size the Datafile on ASM in Oracle 11gR2

    Hi DBAs,
    I need to know what the datafile next extent should be on ASM. I have created an 11gR2 database with ASM. Now some certified DBA told me to create the datafiles as small as possible and make them autoextend with a next extent size of 1M. The expected data growth every month is about 20-25 Gig.
    Please let me know the best practices for sizing the datafiles and extents in 11g using ASM. Any reference to a good document/note would be a great help.
    Thanks
    -Samar-

    Hi Samar,
    > I need to know what the datafile next extent should be on ASM. Some certified DBA told me to create the datafiles as small as possible and make them autoextend with a next extent size of 1M. The expected data growth every month is about 20-25 Gig. Please let me know the best practices for sizing the datafiles and extents in 11g using ASM.
    I don't think there is any kind of recommendation for that; at least, I've never seen one. As you probably know, ASM already divides ASM files (any sort of file written in an ASM diskgroup) into extents, and those extents are composed of AUs. So the datafile size doesn't matter: ASM will balance the I/O across all ASM disks in a diskgroup.
    I guess this thread ( AU_SIZE and Variable Extent Size ), even though it is not the subject of your question, will help you.
    I do believe there are more important factors for ASM performance, like stripe size, LUN size, storage configuration, etc.
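    For reference, a minimal check of the AU size and diskgroup space balance mentioned above (standard v$ views; runs on either the ASM instance or the database instance):
    SELECT name,
           allocation_unit_size / 1024 / 1024 AS au_mb,
           total_mb,
           free_mb
      FROM v$asm_diskgroup;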
    Hope it helps,
    Cerreia

  • Is hard-coded subview sizing brittle or best practice?

    Coming from a web background, where explicit sizing is generally not considered best practice, I am not used to 'hardcoding' positioning values.
    For example, is it perfectly acceptable to create custom UITableViewCells with hardcoded values for subviews' frames and bounds? It seems that when the user rotates the device, I'd need another set of hardcoded frames/bounds. And if Apple decides to release a larger tablet, I'd need yet another update to the code - it sounds strange to me in that it doesn't seem to scale well.
    In my specific case, I'd like to use a cell of style UITableViewCellStyleValue2 but I need a UITextField in place of the detailTextLabel. I hate to completely write my own UITableViewCell but it does appear that in most text input examples I've found, folks are creating their own cells and hardcoded sizing values.
    If that is the case, is there any way I can adhere to the predefined positioning and sizing of the aforementioned style's textLabel and detailTextLabel (i.e., I'd like to replace or overlay my UITextField in place of the detailTextLabel but yet have all subview positioning stay intact)? Just after creating a default cell, cell.textLabel.frame is returning 0, so I assume it doesn't get sized until the cell's layoutSubviews gets invoked... and obviously that is too late for what I need to do.
    Hope this question makes sense. I'm just looking for 'best practice' here.

    I think devs will be surprised at the flexibility in their current apps when/if a tablet is released. I'm of the opinion that little energy will be needed to move existing apps to a larger screen, if any at all. Think ratio, not general size.
    In terms of best practice...hold the course and let the hardware wonks worry about cross-over.

  • SAP Business One Best-Practice System Setup and Sizing

    Get recommendations from SAP and hardware specialists on system setup and sizing
    SAP Business One is a single, affordable, and easy-to-implement solution that integrates the entire business across financials, sales, customers, and operations. With SAP Business One, small businesses can streamline their operations, get instant and complete information, and accelerate profitable growth. SAP Business One is designed for companies with less than 100 employees, less than $75 million in annual revenue, and between 1 and 30 system users, referred to as the SAP Business One sweet spot. The sweet spot covers various industries and micro-verticals which have different requirements when it comes to the use of SAP Business One.
    One of the initial steps during the installation and implementation of SAP Business One is the definition of the system landscape and architecture. Numerous factors affect the system landscape that needs to be created to efficiently run SAP Business One.
    The SAP Business One Best-Practice System Setup and Sizing Wiki (http://wiki.sdn.sap.com/wiki/display/B1/BestPractiseSystemSetupand+Sizing) provides recommendations on how to size and configure the system landscape and architecture for SAP Business One based on best practices.

    For such high volume licenses, you may contact the SAP Local Product Experts.
    You may get their contact info from this site
    [https://websmp209.sap-ag.de/~sapidb/011000358700001455542004#India]

  • Best Practice for disparately sized data

    2 questions in about 20 minutes!
    We have a cache which holds approx 80K objects, which expire after 24 hours. It's a rolling population, so the number of objects is fairly static. We're on a 64 node cluster, with high units set, giving ample space. But... the data has a wide size range, from a few bytes to 30Mb, and everywhere in between. This causes some very hot nodes.
    Is there a best practice for handling a wide range of object size in a single cache, or can we do anything on input to spread the load more evenly?
    Or does none of this make any sense at all?
    Cheers
    A

    Hi A,
    It depends... if there is a relationship between keys and sizes, e.g. if this or that part of the key means that the size of the value will be big, then you can implement a key partitioning strategy, possibly together with key association on the key, in a way that evenly spreads the large entries across the partitions (and have enough partitions).
    Unfortunately you would likely not get a totally even distribution across nodes, because you have a fairly small number of entries compared to the square of the number of nodes (btw, which version of Coherence are you using?)...
    Best regards,
    Robert

  • Best practice: managing FRA

    I was not sure this was the most appropriate forum for this question; if not, please feel free to make an alternative suggestion.
    For those of us who run multiple databases on a box with shared disk for FRA, I am finding the extra layer of ASM and db_recovery_file_dest_size to be a minor inconvenience. The Best Practice white papers I have found so far say that you should use db_recovery_file_dest_size, but they do not specify how you should set it. Currently, we have been setting db_recovery_file_dest_size rather small, as the databases so far are small and even at 3x the database size, the parameter is still significantly smaller than the total disk available in that diskgroup.
    So, my question: is there any downside to setting db_recovery_file_dest_size equal to the total size of the FRA diskgroup for all databases? Obviously, this means that the amount of free space in the diskgroup may be consumed even if db_recovery_file_dest_size is not yet full (as reflected in the instance's V$RECOVERY_FILE_DEST). But is that really a big deal at all? Can we not simply monitor the FRA diskgroup, which we have to do anyway? This eliminates the need to worry about an additional level of disk management. I like to keep things simple.
    The question is relevant to folks using other forms of volume management (yes, I know, ASM is "not a volume manager"), but seems germane to the ASM forum because most articles and DBAs that I have talked to are using ASM for FRA.
    Most importantly, what ramifications does "over-sizing" db_recovery_file_dest_size have? Aside from the scenario above.
    TIA

    As a general rule, the larger the flash recovery area(db_recovery_file_dest_size ), the more useful it becomes. Ideally, the flash recovery area should be large enough to hold a copy of all of your datafiles and control files, the online redo logs, and the archived redo log files needed to recover your database using the datafile backups kept under your retention policy.
    Setting the size of DB_RECOVERY_FILE_DEST_SIZE must be based on the following factors:
    1) your flashback retention target,
    2) which files you are storing in the flash recovery area, and
    3) if that includes backups, then the retention policy for them, or how often you move them to tape.
    The bigger the flash recovery area, the more useful it becomes. Setting it much larger than, or equal to, your FRA disk group does not cause any overhead that is not known to Oracle.
    But there are reasons why Oracle lets you define a disk limit, which is the amount of space that Oracle can use in the flash recovery area out of your FRA disk group.
    1) A disk limit lets you use the remaining disk space for other purposes and not to dedicate a complete disk for the flash recovery area.
    2) Oracle does not delete eligible files from the Flash Recovery Area until the space must be reclaimed for some other purpose. So even if your database is only 5GB and your retention target is very small, if your db_recovery_file_dest_size is much larger, the area will just keep filling.
    3) Say in my case I have one FRA disk group of 150GB shared by 3 different databases. Based on the nature and criticality of each database I have different size requirements for their flash recovery areas. So I use varying db_recovery_file_dest_size values (30GB, 50GB, 70GB respectively) to meet my retention targets and accommodate the kinds of files and backups I want to store in the FRA for these databases.
    Oracle's internal space management mechanism for the flash recovery area is designed in such a way that if you set db_recovery_file_dest_size and DB_FLASHBACK_RETENTION_TARGET to optimal values, you won't need any further administration or management. If a Flash Recovery Area is configured, then the database uses an internal algorithm to delete files from the Flash Recovery Area that are no longer needed because they are redundant, orphaned, and so forth. The backups with status OBSOLETE form a subset of the files deemed eligible for deletion by the disk quota rules. When space is required in the Flash Recovery Area, the following files are deleted:
    a) Any backups which have become obsolete as per the retention policy.
    b) Any files in the Flash Recovery Area which have already been backed up to a tertiary device such as tape.
    c) Flashback logs may be deleted from the Flash Recovery Area to make space available for other required files.
    NOTE: If your FRA is 100GB and 3 databases have their DB_RECOVERY_FILE_DEST set to the FRA, then logically the total of db_recovery_file_dest_size for these 3 databases should not exceed 100GB, even though in practice Oracle allows you to cross this limit.
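    As a concrete sketch of the 150GB / 3-database example above (the 50G figure is illustrative), each database sets its own quota and monitors its own usage from V$RECOVERY_FILE_DEST:
    -- Illustrative per-database quota within the shared FRA disk group:
    ALTER SYSTEM SET db_recovery_file_dest_size = 50G SCOPE=BOTH;
    -- Monitor usage against the quota from inside each database:
    SELECT name,
           space_limit / 1024 / 1024 / 1024       AS limit_gb,
           space_used / 1024 / 1024 / 1024        AS used_gb,
           space_reclaimable / 1024 / 1024 / 1024 AS reclaimable_gb,
           number_of_files
      FROM v$recovery_file_dest;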
    Hope this helps.

  • Best practices for managing Movies (iPhoto, iMovie) to IPhone

    I am looking for some basic recommendations / best practices on managing the syncing of movies to my iPhone. Most of my movies come either from a digital camcorder into iMovie or from a digital camera into iPhoto.
    Issues:
    1. If I do an export or a share from iPhoto, iMovie, or QuickTime, what formats should I select? I've seen 3gp and m4v.
    2. When I add a movie to iTunes, where is it stored? I've seen some folder locations like iMovie Sharing/iTunes. Can I copy them directly there, or should I always add to the library in iTunes?
    3. If I want to get a DVD I own into a format for the iPhone, how might I do that?
    Any other recommendations on best practices are welcome.
    Thanks
    mek

    1. If you type "iphone" or "ipod" into the help feature in imovie it will tell you how.
    "If you want to download and view one of your iMovie projects to your iPod or iPhone, you first need to send it to iTunes. When you send your project to iTunes, iMovie allows you to create one or more movies of different sizes, depending on the size of the original media that’s in your project. The medium-sized movie is best for viewing on your iPod or iPhone."
    2. Mine appear under "movies" which is where imovie put them automatically.
    3. If you mean movies purchased on DVD, then copying them is illegal and cannot be discussed here.
    From the terms of use of this forum:
    "Keep within the Law
    No material may be submitted that is intended to promote or commit an illegal act.
    Do not submit software or descriptions of processes that break or otherwise ‘work around’ digital rights management software or hardware. This includes conversations about ‘ripping’ DVDs or working around FairPlay software used on the iTunes Store."

  • (Request for:) Best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC

    Could you please share your best practices for setting up a new Windows Server 2012 r2 Hyper-V virtualized AD DC that will be running on a new WinSrv 2012 r2 host server? (This will be for a brand new network setup: new forest, domain, etc.)
    Specifically, your best practices regarding:
    the sizing of non virtual and virtual volumes/partitions/drives,  
    the use of sysvol, logs, & data volumes/drives on hosts & guests,
    RAID levels for the host and the guest(s),  
    IDE vs SCSI and drivers, both non-virtual and virtual, and the booting thereof,
    disk caching settings on both host and guests.  
    Thanks so much for any information you can share.

    A bit of non essential additional info:
    We are a small-to-midrange school district who, after close to 20 years on Novell networks, have decided to design and create a new Microsoft network and migrate all of our data and services over to the new infrastructure. We are planning on rolling out 2012 r2 servers with as much Hyper-V virtualization as possible.
    During the last few weeks we have been able to find most of the information we need to undergo this project, and most of it was pretty solid with little ambiguity, except for information regarding virtualizing the DCs, which has been a bit inconsistent.
    Yes, we have read all the documents that most of these posts tend to point to, but found that some, if not most, still refer to performing this under Srvr 2008 r2; we haven't really seen all that much on Srvr 2012 r2.
    We have read these and others:
    Introduction to Active Directory Domain Services (AD DS) Virtualization (Level 100), 
    Virtualized Domain Controller Technical Reference (Level 300),
    Virtualized Domain Controller Cloning Test Guidance for Application Vendors,
    Support for using Hyper-V Replica for virtualized domain controllers.
    Again, thanks for any information, best practices, cookie cutter or otherwise that you can share.
    Chas.

  • Best Practice For Database Parameter ARCH_LAG_TARGET and DBWR CHECKPOINT

    Hi,
    As a best practice, I need to know the recommendations or guidelines concerning these 2 database parameters.
    I found that for ARCH_LAG_TARGET, Oracle recommends setting it to 1800 sec (30 min).
    Maybe some one can guide me with these 2 parameters...
    Cheers

    Dear unsolaris,
    First of all, if you want to track the full and incremental checkpoints, set the LOG_CHECKPOINTS_TO_ALERT parameter to TRUE. You will then see the checkpoint SCNs and the completion times in the alert log.
    A full checkpoint is triggered when a log switch happens, and the checkpoint position in the controlfile is written to the datafile headers. For just a really tiny amount of time the database could be consistent even though it is open and in read/write mode.
    The ARCH_LAG_TARGET parameter is disabled and set to 0 by default. Here is the definition of that parameter:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams009.htm
    If you want to set this parameter, Oracle recommends 1800 as you have said. This can be subject to change from database to database, and it is better for you to verify it by testing.
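    A minimal sketch of both settings discussed above (treat the 1800-second value as a starting point to verify against your own workload, not a guarantee):
    ALTER SYSTEM SET log_checkpoints_to_alert = TRUE SCOPE=BOTH;  -- record checkpoints in the alert log
    ALTER SYSTEM SET arch_lag_target = 1800 SCOPE=BOTH;           -- force a log switch at least every ~30 minutes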
    Regards.
    Ogan

  • Best Practice for Portable Home Directories

    What are the 'best practice' directories to sync for Portable Homes - at login and in the background. I want to make my user experience a little better than it is now.
    Login and logout take about 2 minutes - even over 100Mb Ethernet, and longer using AirPort - and 'background' home directory syncing seems to always suck up all of my network bandwidth, making apps like Safari unusable, even though I have barely changed anything in the folders I am syncing.
    My personal home directory is 1.5Gb, and I keep my Music, Pictures and Movies on the network, as Apple suggests.

    I generally recommend the following for the least impact on user experience:
    1. Put your server and clients that will use mobile accounts and portable homes on a Gigabit Ethernet switch. It's a small price to pay for much more customer satisfaction.
    2. Put more RAM in the server, especially if you're dealing with a few users with large homes or several users with moderately-sized (less than 1.0GB) ones. This will also let you employ server-side tracking (for 10.5 server).
    3. Only sync at login/logout. Use Workgroup Manager to define all portable preferences. Choose to manage the login/logout sync, and specify the items to sync; for the whole home, use "~". Omit things like ~/.Trash. Choose to manage the background sync, but remove all items from the "sync these items" list. Choose to manage the background sync interval by setting it to manual. This way, the user can't accidentally configure a background sync: we've told it to sync nothing unless we say it can.
    --Gerrit

  • Upscale / Upsize / Resize - best practice in Lightroom

    Hi, I'm using LR 2 and CS4.
    Before I had Lightroom I would open a file in Bridge and in ACR I would choose the biggest size that it would interpolate to before doing an image re-size in CS2 using Bicubic interpolation to the size that I wanted.
    Today I've gone to do an image size increase but since I did the last one I have purchased OnOne Perfect Resize 7.0.
    As I have been doing re-sizing before I got the Perfect Resize I didn't think about it too much.
    Whilst the re-size ran it struck me that I may not be doing this the best way.
    Follow this logic if you will.
    Before:
    ACR > select biggest size > image re-size bicubic interpolation.
    Then with LR2
    Ctrl+E to open in PS (not using ACR to make it the biggest it can be) > image re-size bicubic interpolation.
    Now with LR2 and OnOne Perfect Resize
    Ctrl+E to open in PS > Perfect Resize.
    I feel like I might be "missing" the step of using the RAW engine to make the file as big as possible before I use OnOne.
    When I Ctrl+E I get the native image size (for the 5D MkII that is 4368x2912 px, or 14.56x9.707 inches at 300 ppi).
    I am making a canvas 24x20"
    If instead I open in LR as Smart Object in PS and then double click the smart icon I can click the link at the bottom and choose size 6144 by 4096 but when I go back to the main document it is the same size... but maybe if I saved that and then opened the saved TIFF and ran OnOne I would end up with a "better" resized resulting document.
    I hope that makes sense!?!?!?!
    Anyway I was wondering with the combo of software I am using what "best practice" for large scale re-sizing is. I remember that stepwise re-sizing fell out of favour a while ago but I'm wondering what is now the considered best way to do it if you have access to the software that was derived from Genuine Fractals.

    I am indeed. LR3 is a nice to have. What I use does the job I need but I can see the benefits of LR3 - just no cash for it right now.

  • Best practice for install oracle 11g r2 on Windows Server 2008 r2

    Dear all,
    May I know what the best practice is for installing Oracle 11g R2 on Windows Server 2008 R2? Should I create a special Windows account for the Oracle database installation? What permissions should I grant on the folders where Oracle is installed and where the database-related files are located (datafiles, controlfiles, etc.)?
    Just grant Full Control to Administrators and System and remove permissions for all other accounts?
    Also, how should I configure Windows Firewall to allow clients to connect to the database?
    Thanks for your help.

    Hi Christian,
    Check this on MOS
    *RAC Assurance Support Team: RAC Starter Kit and Best Practices (Windows) [ID 811271.1]*
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=BULLETIN&id=811271.1
    DOC Modified: 14-DEC-2010
    Regards,
    Levi Pereira

  • Best practice - updating figure numbers in a file, possibly to sub-sub-chapters

    Hi,
    I'm a newbie trying to unlearn my InDesign mindset to work in FrameMaker. What is best practice for producing figure numbers to accompany diagrams throughout a document? A quick CTRL+F in the FrameMaker 12 Help book doesn't seem to point me in a particular direction. Do diagrams need to be inserted into a table, with a cell for the image and another cell for the figure details? I've read that I should use a letter and colon in the tag to keep it separate from other things that update, e.g. F: (then the figure number descriptor). Is there anything else to be aware of, such as resetting counts for chapters, etc.?
    Some details:
    Framemaker12.
    There are currently 116 chapters (aviation subjects) to make.
    Each of these chapters will be its own book in pdf form, some of these chapters run to over 1000 pages.
    The figure number ideally takes the form "Figure (number of the chapter used, from 1-116) - figure number", e.g. "Figure 34 - 6" would be the 6th image in the book 'chapter 34'.
    The figure number has to cross reference to explaining text, possibly a few pages away.
    These figures are required to update as content is added or removed.
    The (aviation) chapter is an individual book.
    H1 is the equivalent of the sub-chapter.
    H2 is the equivalent of the sub-sub-chapter.
    H3 is used in the body copy styling, but is not a required detail of the figure number.
    I'm thinking of making sub-chapters in to individual files. These will be more manageable on their own. They will then be combined in the correct order to form the book for one of these (1 of 116) subject chapters.
    Am I on the right track?
    Many thanks.
    Gary

    Hi,
    Many thanks for the link you provided. I have implemented your recommendation into my file. I have also read somewhere about sizing anchored frames to an imported graphic using 'esc' + 'm' + 'p'.
    What confuses me, coming from InDesign is being able to import these graphics at the size they were made ( WxH in mm at 300ppi) and keeping them anchored to a point in the text flow.
    I currently have 1 and 2 column master pages built. When I bring in a graphic my process is:
    insert a single cell table in the next space after the current text > drop the title below the cell > give the title a 'figure' format. When I import a graphic, it tries to fit it into the current 2-column layout with only part of it showing in a box which is half the width of a single column!
    A current example: on page 1 (a 2-column page) the text flows for 1.5 columns. At the end of the text I inserted a single cell table, then imported an image into the cell.
    Page 2 (2 column page) has the last line of page 1's text in the top left column.
    Page 3 (a 2-column page) has the last 3 words of page 1 in its top left column. The right column has the table in it, with part of the image showing. The image has also been distorted, like it's trying to fit. These columns are 14 cm wide; the cell is 2 cm wide at this point. I have tried to give cells for images 'wider' attributes using the object style designer, but with no luck.
    Ideally I'm trying to make 2 versions: 1) an anchored frame that fits in a 1-column width on a 2-column page; 2) an anchored frame that fits the full width of my landscape pages (minus some border dimension) - this full-width frame should be created on a new page that follows. I'd like to be able to drop in images to suit these different frames with as much automation as possible.
    I notice many tutorials tell you how to do a given area of the program, but I haven't been able to find one that discusses workflow order. Do you import all the text first, then add empty graphic boxes and/or tables throughout, and then import images? I'm importing text from Word, but the images are separate, having been vectored or cleaned up in Photoshop - they won't be imported from the same Word file.
    many thanks
