Impact of specifying max datatype size

Our database stores multiple vendor versions of the same information. The same attribute can have different lengths for different vendors, e.g. NUMBER(10) for one vendor and NUMBER(13) for another. The same goes for VARCHAR2 fields. The vendors change the lengths often and we need to minimize DDL changes. What is the negative impact of specifying, say, VARCHAR2(2000) and NUMBER as the datatypes to insulate our database from length changes? There are a total of 1200 attributes.
When executing a query, does Oracle allocate the maximum length for each datatype as per the table definition in memory, since it does not know how many characters are stored on disk? If so, is this max allocation only for the disk-to-intermediate buffer? After the intermediate buffer, does it load the buffer cache and program work area with only the actual length of the data? As they say, memory is cheap these days, so is this a non-issue?
We need to know the impact before we make this design decision.
Thanks in anticipation.

Well, unless you take this to the extreme of using CLOBs for all your fields (and you are not using CHAR or NCHAR), you should (technically) be fine going with the higher values: a VARCHAR2 is stored at its actual length, both on disk and in the buffer cache, and it is mainly client-side array fetch buffers that get allocated at the declared width.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1542606219593
But that gets away from one purpose of being able to specify a size on data elements: it enforces the business rule that "column X cannot exceed a length of Y". So I would think you'd have to weigh that into your design considerations somewhere.
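If you do go with the oversized columns, one middle ground is to declare the generous physical size but keep the current per-vendor limit in a named CHECK constraint, since swapping a constraint later is far less disruptive than redefining 1200 columns. Here is a minimal sketch of the idea over JDBC (the table, column, and constraint names and the connection details are invented for illustration, and it assumes the Oracle JDBC driver is on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch: generous physical sizes, with the business-rule length enforced
// by a named CHECK constraint that can be swapped when a vendor changes.
public class WideColumnSketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
             Statement st = con.createStatement()) {

            // The physical size never needs to change when a vendor does.
            st.execute("CREATE TABLE vendor_attr ("
                     + " attr_value VARCHAR2(2000),"
                     + " attr_num   NUMBER,"
                     + " CONSTRAINT attr_value_len CHECK (LENGTH(attr_value) <= 100))");

            // A vendor raises its limit from 100 to 150: swap the rule, not the column.
            st.execute("ALTER TABLE vendor_attr DROP CONSTRAINT attr_value_len");
            st.execute("ALTER TABLE vendor_attr ADD CONSTRAINT attr_value_len"
                     + " CHECK (LENGTH(attr_value) <= 150)");
        }
    }
}

Keep in mind that ADD CONSTRAINT validates the existing rows by default, so a vendor shortening a length means the existing data has to pass the new check first.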

Similar Messages

  • Significance of max heap size mentioned in configtool

    Hi all,
    could anyone please tell me the exact significance of the
    max heap size mentioned in configtool in SAP NetWeaver under
    1) Instance_ID
    - servers general
    - message servers and bootstrap
    2) Dispatcher_ID
    - general
    - bootstrap
    3) Server_ID
    - general
    - bootstrap
    Which of these do I change to improve performance?
    I tried changing the max heap size specified in
    Server_ID -> general,
    but I got the following error while trying to start the server in std_server0.out:
    node name   : server0
    pid         : 3452
    system name : N02
    system nr.  : 01
    started at  : Tue Mar 20 21:53:37 2007
    Reserved 1610612736 (0x60000000) bytes before loading DLLs.
    [Thr 1912] MtxInit: -2 0 0
    Error occurred during initialization of VM
    Could not reserve enough space for object heap
    Regards,
    Namrata.

    Hi,
    The biggest impact on runtime performance comes from adjusting the heap size of the server JVM. This is done in Server_ID->general.  The JVM parameters entered here take precedence over the parameters in Instance_ID->servers general.  The server jobs do by far the most work in the Java engine, so it is very important that the JVM for the server node is tuned to handle the workload.  Whether to tune the server JVM, or even add additional server nodes, depends on the workload and the amount of work on the system.
    Adjusting the heap for the other JVMs will have much less of an impact than adjusting the heap in the server JVM.
    The dispatcher JVM heap settings may have a slight impact during runtime, but compared to the server jobs the dispatcher does relatively little work.  Depending on your situation you may need to tune the dispatcher a little, but my experience has been that the default value for the dispatcher is usually sufficient.
    The values for all of the bootstrap jobs may have an impact on startup time, but they have no impact on runtime, since these jobs go away once the system is up.  From what I have seen, the default values for the bootstrap jobs are sufficient.
    I never adjust anything under Instance_ID; I'm not sure what these parameters are used for, except maybe as default values when adding server nodes.  Maybe someone out there knows.
    Hope this helps.
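    One quick way to confirm that the value set in Server_ID->general actually reached the server JVM is to compare it against what the JVM itself reports. A tiny standalone probe like the following (purely illustrative, not an SAP tool) prints the effective max heap when started with the same flag:
    // HeapProbe.java - prints the maximum heap the running JVM will use.
    // Start it with the flag under test, e.g.: java -Xmx1024m HeapProbe
    public class HeapProbe {
        public static void main(String[] args) {
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
        }
    }
    Incidentally, the "Could not reserve enough space for object heap" error in the original post is what a 32-bit JVM prints when -Xmx asks for more contiguous address space than the process can reserve, so the fix there is a smaller heap (or a 64-bit JVM), not a different configtool entry.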
    Regards,
    Kolby

  • Sophos AV max scanning size / timeout

    Hi,
    I haven't found any changeable settings for max. scanning size or scanning timeout on an S160 v7.1.3 with Sophos AV.
    In the GUI under "Security Services --> Anti-Malware" it shows "Object Scanning Limits: Max. Object Size: 32 MB".
    I'm not able to change it. This parameter seems not to belong to Sophos AV.
    I can change it only after enabling Webroot or McAfee first.
    The CLI has no commands for adjusting AV settings.
    How can I control the max. scanning size or scanning timeout with Sophos AV?
    Does it have fixed values for these?
    Does anyone have an idea how it works?
    Kind regards,
    Manfred

    With administrator rights, the value should be editable.  The object size limit is applied to all scanners that are licensed and enabled on the appliance.
    ~Tim

  • "max-pool-size"   what is it good for?

    SCreator simple CRUD use:
    After a while I get:
    " Error in allocating a connection. Cause: In-use connections equal max-pool-size and expired max-wait-time. Cannot allocate more connection"
    Which is odd, because it's just me using the server/database. It looks like every time I run a test, another connection is lost.
    Do I have to restart the server? Is there a way to say "it's only me, reuse a single connection"?
    Why does "connection pooling" make life harder?
    Can I turn it off?
    cheers
    cts

    I got the same error in my JSC project. I searched for a few days and found the solution: I had made a mistake in my page navigation. I forgot a slash in <to-view-id>.
    A bad example (note the missing leading slash in <to-view-id>):
    <navigation-rule>
    <from-view-id>/*</from-view-id>
    <navigation-case>
    <from-outcome>page13</from-outcome>
    <to-view-id>page13.jsp</to-view-id>
    </navigation-case>
    </navigation-rule>
    A good example:
    <navigation-rule>
    <from-view-id>/*</from-view-id>
    <navigation-case>
    <from-outcome>page13</from-outcome>
    <to-view-id>/page13.jsp</to-view-id>
    </navigation-case>
    </navigation-rule>
    With this mistake, afterRenderedResponse() was never called, and the ResultRowSet was never closed.
    Korbben.

  • SharePoint - Error_1_Error occurred in deployment step 'Add Solution': Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached

    Hi,
    I am Shanmugavel, SharePoint developer, 
    I am facing the below SharePoint 2013 deployment issue while deploying using VS2012.
    If I deploy the same wsp, or an existing wsp
    (last build), using direct PowerShell deployment, the solution is added properly, but the same timeout exception comes up while activating the features.  Please find the error below.
    I tried the following activities:
    1. Restarted my dev server and DB server.
    2. Tried the same solution on a different server.
    3. Tried an existing wsp file (last build version).
    4. Deactivated all the features, including the project's active deployment configuration... but I am still facing the same issue.
    I hope this is not a coding-level issue, because my code has not even started running when the problem occurs.
    Please help me, anyone... I have been stuck on this for the last two days...

    What you need to understand is that installing a WSP does not do much. It just makes sure that your solution files are deployed to the SharePoint farm.
    Next comes the point when you activate the features. That is when the code you have written to "activate" certain features of your custom solution actually runs.
    Regarding the error you are getting, it typically means that you have opened more connections to a SQL database (the default limit is, I guess, 100) than you are allowed.
    If you have a custom database and you are opening a connection, make sure you close it as well.
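    The SharePoint code here would be C#, where the fix is wrapping the SqlConnection in a using block; the sketch below shows the same idea in JDBC form, purely as an illustration (the DataSource and table name are hypothetical):
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import javax.sql.DataSource;

    // Illustrative sketch: a pooled connection that is never closed is never
    // returned to the pool, so the pool eventually runs dry. try-with-resources
    // guarantees close() runs even when the query throws.
    public class PoolSafeQuery {
        static int countRows(DataSource pool, String table) throws Exception {
            try (Connection con = pool.getConnection();
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM " + table)) {
                rs.next();
                return rs.getInt(1);
            } // closing the connection hands it back to the pool
        }
    }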
    Look at the similar discussion here:
    The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached
    I would further suggest looking at the ULS logs to get better insight.
    Manas Bhardwaj's Stream : www.manasbhardwaj.net

  • Need info about max HDD size available for Satellite Pro M30-813

    Hello,
    The following question is mainly addressed to authorized Toshiba support personnel. What exactly is the maximum size of internal HDD that I can use with my Satellite Pro M30-813?
    Recently I bought and installed a Seagate 160 GB SATA drive, onto which I successfully installed WXP Pro, and I ran it for quite a while with no problems. Then, while I was copying a large amount of data from an external hard drive to the new internal disk, with about 50 GB of free space left, I experienced a Windows "delayed write failed" error and a massive partition failure with no possibility of recovering the data. The system would no longer boot and the whole MBR was damaged. As a result, I lost all the data on my new disk.
    Although I realize that Toshiba is not responsible for additional hardware that I use with my laptop and that is not officially supported by Toshiba, I am certain that as an end user of a Toshiba product I have the right to know the max HDD size limitation for my notebook model. Therefore, I request that a Toshiba technical support representative give me a straight official answer to my question.
    Thank you in advance,
    Andrejs
    (You may also contact me privately at my e-mail address)

    Hi Andrew
    > The following question is mainly addressed to authorized Toshiba support personnel
    I think you are in the wrong area if you are looking for an answer from authorized Toshiba support.
    This is a Toshiba user-to-user forum! You will meet Toshiba notebook owners and enthusiasts here who share knowledge and try to solve problems, but nobody from Tosh :(
    I can share my experience with the Satellite M30 and its HDD upgrade possibilities.
    To my knowledge, the Sat M30 supports 40GB, 60GB and 80GB HDDs for sure.
    In my opinion you could use a 100GB HDD, but bigger HDDs will not run and function correctly.
    So switch to a smaller HDD and enjoy the notebook!
    I've googled a little bit and found compatible HDDs and their part numbers:
    HITACHI GBC000Z810 -> 80GB
    HITACHI GBC00014810 -> 80GB
    TOSHIBA HDD2188B -> 80GB
    HITACHI G8C0000Z610 -> 60GB
    HITACHI G8BC00013610 -> 60GB
    TOSHIBA HDD2183 -> 60GB
    TOSHIBA HDD2184 -> 60GB
    I hope this could help you a little bit!
    Best regards

  • How to set  max-heap-size outside the jnlp file?

    Due to bug_id=6631056 it may not be possible to specify max-heap-size within
    the JNLP file for certain JNLP Java applications.
    Are there other possibilities for specifying this JVM parameter?
    In the Java Control Panel there is the possibility to specify Xmx for applets, but not for JNLP.
    I have tried adding properties like
    "deployment.javaws.jre.0.args=Xmx\=128M" without success.
    Many thanks

    Even in the JNLP file you can specify the max heap size:
    <j2se version="1.5+" initial-heap-size="128m" max-heap-size="512m"/>
    Thanks,
    Suresh
    http://sureshdevi.co.in

  • Different max photo sizes in emails, beams, photo stream?

    I've noticed differences in file sizes when emailing photos from the iPhoto app, when inserting into an email, and when emailing directly from the photo.  I've found that iOS 6 sends the largest file when the photo is inserted into an email. If I choose email from iPhoto, the full-size option is a fraction of that. If I email a photo directly from the photo itself, the full size is somewhere in between.  I sent a panoramic photo three times, once using each method. Each time I selected the FULL file size, and each time the results were different:
    iPhoto: 1918K
    insert in email: 12004K
    email from photo itself: 3529K
    Is there a reason iOS 6 does this? 
    Why are the full photo file sizes not universal?
    Are the max file sizes being uploaded to photo stream?
    What is the best way to get the largest photo jpg off of the iPhone aside from syncing?

    John,
    You have several options for re-sizing photos (continuing from what V.K. has already stated). If you want to customize the size of each photo, you'll need to do so before you attach it to an email. Photos can be re-sized in iPhoto, or they can be opened in Preview and exported at whatever size you like.
    If your images are in iPhoto, select one, then choose File>Export. Use the export dialogue to select the size and compression of the resulting file, save it somewhere like your Desktop, then attach it to an email. The choices here are the same as they would be within Mail, but are applied on an image-by-image basis.
    Scott

  • What is the max file size that a Adapter Engine [J2SE] can handle ?

    We have a scenario where a 4 GB compressed file needs to be sent across without mapping; I'm trying to explore the possible options to tackle this.
    Some of the options we are considering:
    - Sending a file from one Adapter Engine [J2SE] to another Adapter Engine [J2SE], bypassing PI, with the adapter logs being monitored. What is the maximum file size that can be handled this way, assuming a standard installation?
    - What is the max file size that the Adapter Engine [J2SE] can send through PI without mapping?
    Any help with the above options, or with any other possible solutions for handling extremely large file transfers, will be much appreciated.

    Hi,
    J2SE is not a good option here.
    If you have 4 GB then you need a copy solution:
    a) build a proxy that will just copy the file from the sender destination to the receiver
    b) build an adapter that will do the same as the proxy
    This way the processing of the message will be controlled by SAP PI/XI,
    and it will be as quick as a copy & paste of the 4 GB file.
    I did option a) for one of the flows, for 300 MB, and it works perfectly.
    Regards,
    Michal Krawczyk

  • Automation in saving a image file with a specific max file size

    Hi everyone,
    I hope someone can help me with this.
    Background info:
    We get several image files every 2 weeks which need to be edited and, above all, reduced in size for web use. This takes one person a full working day, because he/she has to open each file, use Save for Web, and then set the quality to a value where the file ends up at roughly 150-200 KB.
    The images differ: some have few colors, some have a lot of colors, and they also differ in resolution. But they should not be reduced in resolution, only in quality. All other specs of the image should be kept.
    Is there any script, plug-in or similar which can do the same (saving with a specific max. file size) in an automatic and faster way?
    Any help is really appreciated!
    Thanks in advance!
    Kind regards
    Packesel

    *push*
    Hi everyone,
    I still need help with this. Is there any tool (OS X) or script for Photoshop that can accomplish this (see title)?
    ANY help is really appreciated!
    Thanks in advance.
    Regards
    Packesel
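    Not a Photoshop answer, but if a small standalone tool is acceptable, the usual trick is to binary-search the JPEG quality until the output fits under the target size. Here is a sketch in plain Java using only the standard ImageIO API (the 200 KB target comes from the post; the class and argument handling are illustrative, and note that, like Save for Web, a plain re-encode drops most metadata):
    import java.awt.image.BufferedImage;
    import java.io.ByteArrayOutputStream;
    import java.io.File;
    import java.nio.file.Files;
    import javax.imageio.IIOImage;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageWriteParam;
    import javax.imageio.ImageWriter;

    // Sketch: re-encode a JPEG at the highest quality that stays under a
    // target byte size; the pixel dimensions are left untouched.
    public class FitToSize {
        static final long TARGET = 200 * 1024; // ~200 KB, per the post

        public static void main(String[] args) throws Exception {
            BufferedImage src = ImageIO.read(new File(args[0]));
            byte[] best = null;
            float lo = 0f, hi = 1f;
            for (int i = 0; i < 8; i++) {        // 8 halvings: quality step < 1%
                float q = (lo + hi) / 2;
                byte[] candidate = encode(src, q);
                if (candidate.length <= TARGET) {
                    best = candidate;            // fits: try a higher quality
                    lo = q;
                } else {
                    hi = q;                      // too big: lower the quality
                }
            }
            if (best == null)                    // even quality ~0 was too big
                throw new IllegalStateException("target size needs downscaling");
            Files.write(new File(args[1]).toPath(), best);
        }

        static byte[] encode(BufferedImage img, float quality) throws Exception {
            ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
            ImageWriteParam p = writer.getDefaultWriteParam();
            p.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
            p.setCompressionQuality(quality);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            writer.setOutput(ImageIO.createImageOutputStream(out));
            writer.write(null, new IIOImage(img, null, null), p);
            writer.dispose();
            return out.toByteArray();
        }
    }
    Wrapped in a loop over a folder, this reproduces the manual Save-for-Web workflow described above without the per-image fiddling.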

  • What is the max. file size for Lightroom

    I have some very big files. Lightroom says that they are too big to catalog. But what is the max. file size?

    There is no max file size.
    As John points out, the limits are on the number of pixels, not on file size.

  • The reporting service web service connection pool reached the max pool size

    I have a problem: it throws the exception "The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached."
    The situation is that our service uses 15 threads to render reports, but sometimes we get the exception listed above. I didn't change any configuration in rsreportserver.config, and it seems the connections to the ReportServer database from the Reporting Services web
    service were not being disposed.
    Is there any configuration I can modify to fix this issue?

    Hi Dexter,
    In your case, we can try increasing the size of the connection pool to resolve the issue. By default, Max Pool Size is 100; it can be raised in the connection string, e.g. "Max Pool Size=200". You can refer to the similar issue below:
    http://social.msdn.microsoft.com/Forums/en-US/c57c0432-c27b-45ab-81ca-b2df76c911ef/timeout-expired-the-timeout-period-elapsed-prior-to-obtaining-a-connection-from-the-pool?forum=adodotnetdataproviders
    Since the issue is related to ADO.NET, I suggest you post the question in the following forum:
    http://social.msdn.microsoft.com/Forums/en-US/home?forum=adodotnetdataproviders
    That forum is more appropriate, and more experts will be able to assist you there.
    Regards,
    Alisa Tang
    TechNet Community Support

  • Max locate size

    Hello, does anyone know the max locate size that a SNASw router will set for the locate process?
    For VTAM it is 16K; what is it for Cisco SNASw?
    I can't find this in the documentation and I think my "locate" problem depends on it.
    Many thanks in advance to anyone who replies.
    Cheers, Alex

    Hi Alex,
    in IOS 12.1 and 12.2, SNASw is bounded by a locate size of 1K, which means that if you have more than 8-10 uplinks defined you may run out of room and encounter less desirable session paths or outright session failures.
    An enhancement was added in IOS 12.2T to raise the size to 4K. If you want to stay with IOS 12.1 or 12.2 (because they are more stable than 12.2T), you can work around the problem by configuring uplinks only to your primary NNS/DLUS and backup NNS/DLUS, and using connection network for all other uplinks (vnname on the port). SNASw will then only include 3 tail vectors in each locate (one to the primary, one to the backup, and one to the virtual routing node).
    - Ray

  • Max. disk size in A1000?

    Hi,
    Does anybody know the max. disk size that works in an A1000 array? So far we have used 76GB disks in an 8-slot A1000.
    Thanks

    Hello Erwin,
    SCSI is less limited than IDE. The next bigger size should work.
    The Seagate ST3146707LC (146.8GB - 10000 RPM Ultra-320) and Fujitsu MAT3147NC (146.8GB - 10000 RPM Ultra-320) are listed as compatible.
    http://sunsolve.sun.com/handbook_pub/Systems/A1000/components.html#Disks
    Michael

  • Max File size in UFS and ZFS

    Hi,
    Can anyone share the max file size that can be created on Solaris 10 UFS and ZFS?
    And what is the max file size when compressing with tar and gzip?
    Regards
    Siva

    from 'man ufs':
    A sparse file can have a logical size of one terabyte.
    However, the actual amount of data that can be stored
    in a file is approximately one percent less than one
    terabyte because of file system overhead.
    As for ZFS, well, it's a 128-bit filesystem, and the maximum size of a file or directory is 2^64 bytes, which works out to 16 exbibytes (i.e. 16384 pebibytes), even though my calculator gave up on calculating it.
    http://www.sun.com/software/solaris/ds/zfs.jsp
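    For the record, here is the exact number, computed with plain Java since ordinary calculators overflow (just arithmetic, nothing ZFS-specific):
    import java.math.BigInteger;

    // 2^64 bytes, printed raw and converted to exbibytes (1 EiB = 2^60 bytes).
    public class MaxZfsFile {
        public static void main(String[] args) {
            BigInteger bytes = BigInteger.valueOf(2).pow(64);
            System.out.println(bytes + " bytes");              // 18446744073709551616
            System.out.println(bytes.shiftRight(60) + " EiB"); // 16
        }
    }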
    .7/M.
