Increase maximum size of removable disk on Microph
I couldn't find an answer on this one. Someone knows if it's possible?
Thanks in advance.
Message Edited by creativevin on 05-27-2007 07:36 PM
Similar Messages
-
How to increase maximum size of contentID /dDocName?
Hi,
As part of our project requirements we generate the Content ID from several fields, and in some cases the result exceeds the Content ID's maximum length (i.e., 30 characters). Is there a way to increase the maximum size of the Content ID to 100? Please let me know the possible approaches.
We are presently using UCM 10gR3.
Thanks in advance.
You would be better off creating a new metadata field of whatever size you need and storing your required value in that. If you really want to muck with the dDocName (I strongly recommend you don't), there are several database tables that would need to be modified directly... you'll need an Oracle consultant to figure all that out.
-
Maximum size for Hard Disk in Lenovo 3000 N100
I have the Lenovo 3000 N100
it came with 100 GB Hard Disk 5400 RPM
What is the maximum hard disk size that can be used? Can it be 7200 RPM, or only 5400 RPM?
Thanks for any reply
Abraham
Solved!
Go to Solution.
@ortegaluis: My hard disk crashed a few days back (I have a 3000 N100 and am currently running the machine from a 1 GB pen drive).
I was planning to buy a 320/500 GB HDD: Seagate, Western Digital, or Hitachi (Hitachi was the one initially installed).
Could you please tell me which one would be best? (It seems you have experience in the field.)
Do I need to look for any specs other than these: 7200 RPM, 2.5"?
Thanks
PS: Anyone who wants to know how hard disks can behave before crashing should visit http://datacent.com/datarecovery/hdd/hitachi . Mine gave a clicking sound exactly like the one on that page and worked for a few days before giving up. If you hear similar sounds, back up your data immediately unless you want to lose it like I did -
Is it safe to increase the LUN size of an ASM disk online in 11gR2?
I have an Oracle 11gR2 ASM instance in a RAC, using an EMC SAN. I have one disk group (DATA) with one disk (DISK1), configured with external redundancy [the Flash Recovery Area is non-ASM]. Red Hat Linux.
My storage is a 450GB LUN, and I want to grow it by 150GB. I should not add a 150GB LUN as a new disk, as the disks would be uneven. And I don't want to create a new disk group, as that's more admin.
So I want to increase my LUN from 450GB to 600GB, which can be done online by the UNIX admin. Then I can resize the disk in ASM and all should be OK. This is a production system.
Has anyone actually done this? Does it work? I have seen posts worried about changes to partition tables.
924111 wrote:
Hi there.
Adding more LUNs to the ASM disk group is one way to do it, and like anything it has pros and cons (maybe no cons at all). But consider that in some shops you are charged internally per task, per team involved (yes, even today, with outside cloud services being cheaper). Hypothetically, imagine 200 USD per change, per team involved, for a new LUN:
Storage
Unix
DBA
That is 600 USD. Now imagine a fixed 50 USD fee to grow a LUN, since everything is already set up and the DBA can handle it as part of business as usual. That is 600 vs. 50.
That is only internal funny money - if they need the space, they (your internal client) will pay for it. If you just extend the LUN you still give up the parallelism gained by multiple LUNs, and that lost parallelism could cost your client $$$ in processing-time charges.
Hypothetically, in a "future release", I would not be surprised if you are able to store your database files on a "cloud" architecture. -
UCM 11g - Increase maximum size of metadata?
Hello
Can someone please provide the steps to increase the maximum number of characters for metadata?
I see there is a thread for 10g that warns against performance issues: http://forums.oracle.com/forums/thread.jspa?threadID=2154981&tstart=0
Are the steps the same? If so, where do we add that variable in 11g?
-Mitch
Yes, and yes (see here: http://download.oracle.com/docs/cd/E17904_01/doc.1111/e10726/c06_core_ref.htm#i1068045). Search for config.cfg; it should be located at IntradocDir/config/config.cfg
-
How to increase size of a Disk Group in ASM?
Hi,
I'm testing the ASM in AIX. I created a Logical Volume with size is, e.g., 10G. Then I assign this LV with a Disk Group in ASM. Now I increase the size of the LV using AIX's smit, e.g., to 20G. How can I increase the size of my Disk Group in ASM?
I understand that Oracle recommends assigning a whole physical hard disk to a disk group in ASM. Does that mean the resize command of ASM can only reduce the size of a disk group, but cannot increase it?
Any advice?
Pham
A disk group is a logical term; the physical size of a disk group is determined by the disks belonging to it. So you cannot increase a disk group directly, only the disks in it. You can resize a disk up to its physical limit (the RESIZE option of the ALTER DISKGROUP statement).
Werner -
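The resize Werner describes can be sketched as a short SQL*Plus session. This is a minimal sketch, not a tested procedure: it assumes the underlying LV or LUN has already been grown at the OS level, that you can connect as SYSASM, and the disk group name DATA is illustrative.

```shell
# Hedged sketch: after the underlying LV/LUN has been grown at the OS
# level, ask ASM to extend its disks to the newly reported size.
# Disk group name "data" is illustrative; adjust to your environment.
sqlplus -S / as sysasm <<'SQL'
-- Resize every disk in the group up to its physical limit
ALTER DISKGROUP data RESIZE ALL;
-- Confirm the new sizes
SELECT name, total_mb, free_mb FROM v$asm_disk;
SQL
```

Omitting an explicit SIZE clause makes ASM resize each disk up to the size the OS now reports for the device, which is what both of these threads are after.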
How can i increase the maximum size of a text column in Access
hello,
I am inserting data into an MS Access database through my servlet and I get an exception whenever the length of the text data exceeds 255 characters, since in Access the maximum size of a text column is 255. Can anyone tell me whether there is a way to increase the maximum column size in Access, i.e., to make it more than 255?
A text field in Access has a maximum size of 255 characters. If you want to store text larger than 255 characters in Access, you need to use the memo data type rather than text. There are special considerations for using an Access memo data type and accessing it through JDBC. Specifically, I believe it must be accessed as a binary object (BLOB) rather than text when using the JDBC-ODBC bridge. There are many discussions in these forums about how to manage the Access memo data type and the issues involved.
Good luck! -
Hi, is there a way to increase the maximum size of a download? I'm trying to download Adobe Premiere Pro (1 GB) and I'm getting an error message that says my maximum download is 10 MB
That's between you and your internet service provider. By the way, you posted to the 10.3 forum. 10.3 can't run on your MacBook Pro. Don't forget the following facts:
b= bit
B = Byte
8 bits in a byte
A typical DSL connection has 1 Mbps speed or 128 kBps speed.
At that speed
1 minute gives you 7.5 MB
10 minutes gives you 75 MB
100 minutes gives you 750 MB
A typical cable connection can be 5 times faster although some are 30 times faster.
A typical fiber connection is 15 times faster and some are 50 times faster.
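As a quick sanity check of the arithmetic above, here is a tiny shell sketch. The 1 Mbps / 128 kB/s DSL figure is taken from the post; the 1 GB size matches the download in question.

```shell
# How long does a 1 GB (1024 MB) download take at 128 kB/s?
speed_kBps=128
size_MB=1024
minutes=$(( size_MB * 1024 / (speed_kBps * 60) ))
echo "about $minutes minutes"   # about 136 minutes, i.e. ~7.5 MB per minute
```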
Ask what your speed is rated at. -
Maximum external Hard Drive size for Airport Disk?
Is there a maximum size for the hard drive that's being connected to the USB port on the new Airport Extremes?
Or will it support up to the maximum allowed by the USB chipset?
Since it accepts standard USB hard drives, the limitation should have nothing to do with USB or the AirPort Extreme base station (AEBS). The limitation should only be with the IDE/ATA disk controller inside the USB enclosure.
-
Maximum possible hard disk size in my iMac G5 20-inch?
I am using an iMac G5, 2 GHz, with a 400 GB hard disk.
I would like to buy a larger disk, 500 GB minimum or even 750 GB.
- What is the maximum size the iMac can handle?
- Where would I find a document on apple.com with further details on the hard disk specifications?
- What is the interface used for the disk?
Thanks in advance for your help.
Patrick
Hello,
The iMac G5 uses the Serial ATA (SATA) interface in the standard 3.5" desktop size.
You can install any capacity of drive you can buy on the market today.
Installation Instructions (4.5 MB).
mrtotes -
Maximum size HardDrive for Ultra 30
Does anybody know what the maximum size hard drive is for an Ultra 30?
I've been able to put a 40+ GB hard drive into an Ultra 5, but the Ultra 30 does not seem to accept a 38+ GB hard drive. It gives a trap error that I can't remember offhand.
Thank you,
James
Hello James,
Larger disks (> 36 GB) should work fine. You could even use one of those 143 GB drives that cost several times what the Ultra 30 is worth.
Reseat the hard disk, then use probe-scsi-all at the ok prompt. Is the hard disk detected?
Probably your drive was previously used with another operating system and therefore contains partition information (a label) that Solaris doesn't like.
Unfortunately your description is very vague. When did the problem appear: during booting from the first hard disk, when this drive was inserted in the second bay, or when invoking format after booting from CD? Maybe the drive is damaged and interferes with the other devices on the SCSI bus (first hard disk, CD drive).
In the first case, install this hard disk (temporarily) in the first bay (with the original one removed) and boot from CD (into single-user mode with verbose display). If the boot fails, the hard disk is bad.
In the second case it might help to remove the partition information with the drive installed in the previous system (the PC).
Happy New Year !
Michael -
Maximum size of a robohelp project
Hi All,
Assuming you have a high-powered computer, what is the maximum size you would recommend for a RoboHelp 7.0 project? Is there a limit specified? Is there a limit for the number of topics allowed?
Thanks,
Jen
Peter, I did draw the correct conclusion, but perhaps I did not phrase it very well.
If the user generates Printed Documentation from a RoboHelp project, then the size of the project (that is, the part of it used to create the Printed Documentation) does have size limits (or the user needs to work around this problem by merging multiple DOC/PDF files afterwards).
Indeed, this only applies to projects from which the user generates Printed Documentation.
Our high-end PCs have a very large amount of RAM (as much as the latest motherboards can handle). We can easily manipulate PDF and DOC files with thousands of pages. But generating more than 1,000 pages of Printed Documentation from RoboHelp seems to be problematic.
We work with a lot of reports that contain many hundreds and often several thousand pages. Only RoboHelp's Printed Documentation cannot handle it.
Perhaps I should add that the size of the RoboHelp project itself does not necessarily slow things down. It is the total amount of complexity that makes life difficult for RoboHelp. 1,000 pages of plain HTML and a basic TOC do not cause problems. But tons of hyperlinks, an extensive index, See Also links, browse sequences, image maps, etc. will cause trouble for those same 1,000 pages. Removing items from the database in particular can take some time then, and increases the likelihood of a corrupted database.
-- Peter, I hope that this aligns with your perspective. -
What is the maximum size limit for a KM document to be uploaded?
Hi,
1. I have a requirement wherein the user wants to upload a document of more than 448 MB in KM.
I am not sure what maximum limit standard SAP provides by default, so that I can advise the user in the first place.
2. What if we want to increase the maximum size limit? Where do we do that? Can you suggest the path for the same?
Thanks in advance.
Regards
DK
Hello,
CM Repository in DB Mode:
If there is a large number of write requests in your CM usage scenario, set up the CM repository in database mode. Since all documents are stored in the database, this avoids unintentional
external manipulation of the data.
CM Repository in FSDB Mode:
In this mode, the file system is predominant. If files are removed from or added to the file system, the database is updated automatically.
If you mainly have read requests, choose a mode in which content is stored in the file system. If this is the case, make sure that access to the relevant part of the file system is denied or
restricted to other applications.
What is the maximum size of a document that we can upload in a CM (DB) and CM (FSDB) repository without compromising performance?
There are the following restrictions for the databases used:
· Maximum number of resources (folders, documents) in a repository instance
FSDB: No restriction (possibly limited by file system restrictions)
DBFS and DB: 2,147,483,648
· Maximum size of an individual resource:
FSDB: No restriction (possibly limited by file system restrictions)
DBFS: 8 exabytes (MS SQL Server 2000), 2 GB (Oracle)
DB: 2 GB
· Maximum length of large property values (string type): 2,147,483,647 bytes
What is the impact on the performance of the Knowledge Management platform and the Portal platform when there is a large number of documents sized somewhere from 100 MB to 500 MB or more?
The performance of the KM and Portal platforms depends on the type of activity the user is trying to perform. That is, a heavy number of retrievals of large documents stored in the repository in read/write mode decreases the performance of the platform. Searching and indexing in the documents will also take a proportionate amount of time.
For details, please refer to,
http://help.sap.com, Goto "KM Platform" and then,
Knowledge Management Platform > Administration Guide
System Administration > System Configuration
Content Management Configuration > Repositories and Repository
Managers > Internal Repositories > CM Repository Manager
Technically speaking, the VM has a restriction depending on the platform. On W2k it is somewhere around 1.2 GB, and on Solaris, I believe, 4 GB.
Say, for instance, I was on a W2k box and allotted 500+ MB for my J2EE Engine; that would leave me with the possibility of uploading documents of close to 600 MB max.
See if the attached SAP Notes (610134, 634689, 552522) can provide you some guidance for setting your VM to meet your needs.
SUN Documentation
Java HotSpot VM Options
http://java.sun.com/docs/hotspot/VMOptions.html
How to tune Garbage collection on JVM 1.3.1
http://java.sun.com/docs/hotspot/gc/index.html
Big Heaps and Intimate Shared Memory (ISM)
http://java.sun.com/docs/hotspot/ism.html
Kind Regards,
Shabnam. -
How to increase /tmp size in Linux?
There are so many methods available on the internet, but I want a tested and authentic way.
please check the info of current system.
<pre>
df -k
Filesystem           1K-blocks       Used Available Use% Mounted on
/dev/sda5              8254240    2081408   5753540  27% /
/dev/sda1               194442      12140    172263   7% /boot
none                   4154288          0   4154288   0% /dev/shm
/dev/sda7            314043080  113563632 184526908  39% /home
/dev/sda6              1035660      34228    948824   4% /tmp
/dev/sda3             12096756    6632992   4849280  58% /usr
</pre>
Regards,
Amin
Your problem stems from the initial setup, which included creating a separate disk partition for the /tmp directory. Sophisticated disk partitioning was a common practice in the past, used to prevent a bad process from filling up a disk and eventually stopping the core system from functioning, or to reserve disk space for certain logical areas. Considering how fast and cheap disk space has become, there is all the more reason to avoid partitioning where it is not essential.
If you need more tmp space, e.g. to install an Oracle database, you can assign a temporary tmp directory and put it under /home.
<pre>
sudo mkdir /home/tmp2
sudo chmod 1777 /home/tmp2
</pre>
Set the tmp directory prior to starting your application:
<pre>
export TMPDIR=/home/tmp2
./runInstaller.sh
</pre>
To permanently relocate /tmp you will need to boot the system into single-user mode, edit /etc/fstab to remove your tmp partition, and create a /tmp directory with 1777 permissions under root, or create a symlink to another location. -
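The permanent relocation just described can be sketched as follows. To keep the example safe to run, it operates on a copy of fstab in a scratch directory; on a real system you would edit /etc/fstab itself from single-user mode. The device name /dev/sda6 is taken from the df output earlier in the thread.

```shell
# Hedged sketch of permanently removing the dedicated /tmp partition.
# Works on a scratch copy, so it is safe to run as a demo.
work=$(mktemp -d)

# Use the real fstab if readable, else a stand-in line like the one
# a dedicated /tmp partition would have:
cp /etc/fstab "$work/fstab" 2>/dev/null || \
  printf '/dev/sda6  /tmp  ext3  defaults  1 2\n' > "$work/fstab"

# Comment out the old /tmp entry (on the real system: edit /etc/fstab):
sed -i 's|^/dev/sda6[[:space:]]\{1,\}/tmp|#&|' "$work/fstab"

# A replacement /tmp directory needs the sticky bit (mode 1777):
mkdir -p "$work/tmp"
chmod 1777 "$work/tmp"
```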
Aperture database maximum size
Is there any recommended maximum size for libraries?
I'm using an iMac, 4 GB RAM, Intel dual core.
Don't know what the absolute limit might be; there are some here with well over 100,000 images. Practical limits are:
-- Around 10,000 images in a single project. As I think of projects as rolls of film, my largest is only 200-300 images.
-- Physical disk space. A "managed" library cannot span more than one logical volume. But the largest element in your library is likely to be your master images, not your indices or previews, so by moving your master images out of the library ("referenced" masters) you can scatter them over any number of volumes. As shown by Kevin Doyle's research, only the indices/versions/previews need be on a fast drive. So, with the masters removed from your library, and the library placed on a fast, internal drive, you can manage a lot of images. Takes a little care and attention, but works very well. (Sierra Dragon, among others, is a strong proponent of this structure. Search some of his posts.)
The occasional defrag of the library itself speeds up scrolling, searching, and exporting. The master images themselves, as they are never rewritten, do not fragment. (Unless, of course, they were fragmented when first written.)
What is your problem or worry?
Message was edited by: DiploStrat