Memory management - Data area
Hi,
I need to define a data area with 65 channels and a channel length of 10,000,000. With different configurations, DIAdem displays the following error messages:
1-The defined area is too large for this computer to reboot
2-Numeric value out of valid range (6400000000>2147483647)!
Error occurred while installing the data matrix
Does anyone know how to solve this? What is the minimum RAM required? How much free space is needed on the HD? I tried with DIAdem 8.1 and 9.1 with the same result.
Help will be appreciated,
Thanks.
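A quick back-of-the-envelope check makes the second error message concrete: the requested matrix produces a byte count that no longer fits in a signed 32-bit integer, which is the limit the error quotes (2147483647). This is a sketch in Python, not DIAdem script; the 8-bytes-per-value figure is an assumption, not from the thread (the 6,400,000,000 in the error suggests DIAdem's own per-value accounting differs slightly, but the conclusion holds for any per-value size of 4 bytes or more):

```python
# Rough size check for the requested DIAdem data area.
# Assumption (not from the post): 8 bytes per numeric value.
channels = 65
channel_length = 10_000_000
bytes_per_value = 8  # assumed double-precision storage

total_values = channels * channel_length      # 650,000,000 values
total_bytes = total_values * bytes_per_value  # 5,200,000,000 bytes

INT32_MAX = 2**31 - 1                         # 2,147,483,647
print(total_bytes, total_bytes > INT32_MAX)   # -> 5200000000 True
```

Even at 4 bytes per value the total (2,600,000,000) already exceeds the 32-bit signed limit, so no amount of RAM or disk reconfiguration would let a 32-bit build address the matrix as one block.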
Thanks for your quick answer,
Actually, I solved the problem by splitting the data files to reduce the length, and after the analysis I join the result channels. I did it this way because I didn't want to spend much time modifying the code I already had for analysing the data. My code applies several filters to the data and then searches for all the positive peaks above zero and the negative peaks below zero, in order to plot some histograms related to the peaks.
I think your solutions are well suited, but since I already had the code, I thought it would be possible to allocate that data matrix...
Now, just for my knowledge: is it possible to create that data matrix with 1 GB of RAM? Could the problem be that, due to my company policy, our hard drives are formatted as FAT32? If I use an NTFS drive, would it be possible?
Thanks for your interest,
Marc.
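For readers landing here later, the split-and-join workaround Marc describes can be sketched in a few lines. This is plain Python, not DIAdem script, and the function names are invented for illustration; the point is that per-chunk peak results can be joined without losing peaks that fall on a chunk boundary, by extending each chunk one sample on each side:

```python
def find_peaks(samples):
    """Return (positive_peaks, negative_peaks): local maxima above zero
    and local minima below zero, as (index, value) pairs."""
    pos, neg = [], []
    for i in range(1, len(samples) - 1):
        x = samples[i]
        if x > 0 and samples[i - 1] < x >= samples[i + 1]:
            pos.append((i, x))
        elif x < 0 and samples[i - 1] > x <= samples[i + 1]:
            neg.append((i, x))
    return pos, neg

def peaks_in_chunks(samples, chunk_len):
    """Process the signal in chunks (as when a long file is split),
    then join the per-chunk results.  Chunks overlap by one sample on
    each side so peaks on a chunk boundary are not missed."""
    pos, neg = [], []
    for start in range(0, len(samples), chunk_len):
        lo = max(start - 1, 0)
        chunk = samples[lo:start + chunk_len + 1]
        p, n = find_peaks(chunk)
        # keep only peaks whose global index belongs to this chunk
        pos += [(i + lo, v) for i, v in p if start <= i + lo < start + chunk_len]
        neg += [(i + lo, v) for i, v in n if start <= i + lo < start + chunk_len]
    return pos, neg
```

A histogram of the joined peak values then matches what the single-matrix analysis would have produced.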
Similar Messages
-
I need a memory management / data storage recommendation for my iMac 2 GHz Intel Core 2 Duo
Since giving my mid-2007 iMac a 2 GB memory boost to accommodate Lion all has been well, however my memory is full. I have a sizable iTunes library and 6000 photos in iPhoto. I think I should store all this more effectively, for the safety of the music and photos and for the well-being of the computer... but there seem to be choices. Is this where iCloud comes into play, or Time Capsule, or should I just get an external mini hard drive? Can anyone help me with some pearls of wisdom on data storage?
Greetings John,
There are two types of memory you mention here.
The 2 GB memory you refer to is RAM. It is not used for storing information; rather, it gives the computer a place to do much of its work.
File storage is handled by the hard drive of the computer.
If your available hard drive space is getting low you can move larger files to a Mac Formatted external drive:
Faster connection (FW 800) drives:
http://store.apple.com/us/product/H2068VC/A
http://store.apple.com/us/product/TW315VC/A
http://store.apple.com/us/product/H0815VC/A
Normal speed (USB) drives:
http://store.apple.com/us/product/H6581ZM/A
Larger files can include entire databases like iTunes, iMovie, or iPhoto.
Keep in mind that if you move these items to an external drive you will have to have the drive plugged in and powered on to access the data on them. In addition, if you move important information off your internal drive to an external, you should be sure that your backup solution is backing up that external drive to keep your information safe.
iCloud is not a file storage solution, and Time Capsule is not suited for storing databases like those mentioned above (it's meant primarily as a backup solution). I would stick with external drives (one to hold your big files and another big enough to back up both your computer and the first drive).
Here are some other general computer clean up suggestions: http://thexlab.com/faqs/freeingspace.html.
Hope that helps. -
Oracle Automatic Memory Management
Are there any restrictions or best practices regarding the setting of Oracle Automatic and Manual Memory Management (in particular the setting of these parameters: SGA_TARGET, SGA_MAX_SIZE, MEMORY_TARGET, MEMORY_MAX_TARGET) on various Oracle instances on the same server/node/virtual machine? In other words, can the memory management schemes be mixed among various Oracle instances on the same server/node/virtual machine? For example, suppose a server houses three Oracle instances - inst01, inst02, inst03. Can inst01 and inst03 use automatic memory management and inst02 use manual memory management? Are there any restrictions or best practices that should be noted? If there are multiple Oracle instances on the same server, is it a requirement that they all follow the same memory management scheme, or is it a best practice to do so? Obviously, the total memory requested for the Oracle instances combined would not exceed the total physical memory available on the server.
Note: we are currently using Oracle 11g R2, specifically 11.2.0.1, on Solaris.
sbing52 wrote: Are there any restrictions or best practices regarding the setting of Oracle Automatic and Manual Memory Management (in particular the parameters SGA_TARGET, SGA_MAX_SIZE, MEMORY_TARGET, MEMORY_MAX_TARGET) on various Oracle instances on the same server/node/virtual machine?
Not really, or at least none that I am aware of.
In other words, can the memory management schemes be mixed among various Oracle instances on the same server/node/virtual machine?
Each instance is going to work individually, so the answer is yes.
For example, suppose a server houses three Oracle instances - inst01, inst02, inst03. Can inst01 and inst03 use automatic memory management and inst02 use manual memory management?
Yes.
Are there any restrictions or best practices that should be noted? If there are multiple Oracle instances on the same server, is it a requirement that they all follow the same memory management scheme, or is it a best practice to do so?
As I said, not that I am aware of. As long as you are able to accommodate the memory requirements within your installed RAM, you should be okay.
Obviously, the total memory requested for the Oracle instances combined would not exceed the total physical memory available on the server.
Yep.
Note: we are currently using Oracle 11g R2, specifically 11.2.0.1, on Solaris.
Patch to the latest patchset, which is 11.2.0.3.
Aman.... -
Hi everyone, I am sahasvat. I am using an Acer Aspire 5755. Its specifications are an Intel i3 3rd gen processor, 2 GB DDR3 RAM and a 500 GB HDD, and I am running Windows 8. I opened about 4 tabs in Google Chrome and suddenly a blue screen with a sad face displayed a KERNEL_DATA_INPAGE error, and the machine restarted. After 5 minutes I opened Google Chrome again, and another blue screen displayed a MEMORY_MANAGEMENT error. While restarting, I pressed F8 at the BIOS screen and booted to safe mode, as normal mode didn't open. About 20 minutes later I restarted my laptop and received error 0xc000021a before Windows booted. I then held the power button for a minute and switched on my PC, but it showed a different error, 0xc00000e9. Over many restarts, either 0xc000021a or 0xc00000e9 would appear, not one fixed error. I don't have my recovery CD as it got scratched. Please help me get out of this problem.
I can't even access my laptop and I am using my Windows phone to post this. -
Data Backup 2.1 and Mac Memory Management?
I'm trialing a backup program called Data Backup 2.1. It keeps versions of my programs, which I need, as I've often had corruptions and not noticed them until a few days after the fact. I've been using Retrospect, but read a review that praised Data Backup. The thing I've noticed with it is that although it is very fast, like SuperDuper, it seems to affect my free memory dramatically. I've noticed that it will finish and, instead of having say 250 MB of active memory in use, I'll have 700 MB of active memory. Inactive will be low, whereas normally it's high. Free memory during its backup can drop to 20 MB (I have 1.5 GB). The free memory, once you start to use your computer, seems to recover to around 500-700 MB. The one thing I have noticed of concern is that while it's running I get pageouts, which I otherwise never get, and my reading about Mac memory management is that you want to avoid pageouts; if you get them, you need more memory (for what I'm doing, 1.5 GB should be plenty). I've asked the Data Backup people what's going on, and they don't think it's something to be concerned about, but they said it is probably something to do with the way they are caching.
I'm just wondering - do you think this is something to be concerned about? I'd like to switch from Retrospect as, although I know it, I'm not sure how committed they are to the Mac market any longer, and it is way slower, but it does manage memory well. However, I don't want to get Data Backup if it is affecting RAM inappropriately.
Kerry
Synchronize! Pro X will maintain versioned archives and perform full, incremental, and bootable backups, both to local and to network devices. I have found that SPX is just about as full-featured as Retrospect, with certain limitations: it cannot back up across multiple media (CDs, DVDs, tape), there are no extensive browser windows like Retrospect's, and no backups without scanning (as SuperDuper does for its "fast" updating backup).
SPX supports schedules, multiple-item backups (can select individual files and/or folders), extensive backup/synchronize customizations, can run as "root", can auto-mount devices (including network drives), and it's a universal binary.
It's also nearly as expensive as Retrospect but in my opinion it's worth it.
If you want a less costly backup solution without all the features of SPX, but with all the features of SuperDuper (and in my opinion better than SD) then try Deja Vu. Also a universal binary, it supports incremental archives, full, incremental, and bootable backups to local or network drives, supports scheduling and runs as a preference pane.
Finally, for the truly cheap there are PsyncX and RsyncX - both freeware. They are GUI wrappers around the basic backup and synchronizing tools that are part of Unix (ditto, rsync, and psync).
Download mentioned software from www.versiontracker.com or www.macupdate.com.
In which table is demand management data (MD61) stored?
Hello guys.
I need to know in which tables demand management data from MD61, MD62 and MD63 is stored.
Also, are there any tips/rules for preparing a load from an Excel spreadsheet file?
Is there any BAdI or BAPI to load this information directly into the table?
Just for information: my current client doesn't use SAP yet. The implementation is planned for the middle of next year, though they use SAP in other plants around the world.
The main idea is to use MRP from SAP to get suggestions for purchase requisitions and planned orders, and to carry out the production and purchase activities in the legacy system.
Please give me some instructions.
Thanks in advance.
Harlen Pinheiro
SAP PP
Brazil
Hello Harlen,
This information is stored in tables PBED and PBIM.
You can use BAPI BAPI_REQUIREMENTS_CREATE to load data into demand management.
Please also take a look at the following document before opening this kind of thread:
Landing page for new users in SAP PP - ERP Manufacturing - Production Planning
BR
Caetano -
The Full Optimization & Lite Optimization Data Manager packages are failing
Hi,
The Full Optimization and Lite Optimization Data Manager packages are failing with the following message: "An error occurred while querying for the webfolders path".
Has anyone had a similar issue earlier? If so, please let me know how we can rectify it.
Thanks,
Vamshi Krishna
Hi,
Does the Full Optimize work from the Administration Console directly?
If it's the case, delete the scheduled package for Full Optimize every night (in both eData -> Package Schedule Status and in the Scheduled Tasks on your server Control Panel -> Scheduled Tasks), and then try to reschedule it from scratch.
If that doesn't solve your problem, I would check whether there are some "wrong" records in the FACT and FAC2 tables.
After that, I would also check whether tblAppOptimize has values other than 0. For all applications, you should have a 0 there.
Hope this will help you..
Best Regards,
Patrick -
What are the Disadvantages of Management Data Warehouse (data collection) ?
Hi All,
We plan to implement Management Data Warehouse on production servers.
Could you please explain the disadvantages of Management Data Warehouse (data collection)?
Thanks in advance,
Tirumala
>We are plan to implement Management Data Warehouse in production servers
It appears you are referring to production server performance.
BOL: "You can install the management data warehouse on the same instance of SQL Server that runs the data collector. However, if server resources or performance is an issue on the server being monitored, you can install the management data warehouse
on a different computer."
Management Data Warehouse
Kalman Toth Database & OLAP Architect
SQL Server 2014 Database Design
New Book / Kindle: Beginner Database Design & SQL Programming Using Microsoft SQL Server 2014 -
Sort Area Size in Automatic memory management
Hello All
I am aware that the *_AREA_SIZE parameters are ignored if PGA_AGGREGATE_TARGET is set.
So how is it possible that if we increase SORT_AREA_SIZE, the performance improves?
Does this necessarily mean that PGA_AGGREGATE_TARGET was not set to a proper value, so that SORT_AREA_SIZE was used instead?
thanks
Hi,
If you have set WORKAREA_SIZE_POLICY=AUTO, then under the automatic PGA memory management mode, sizing of work areas for all sessions becomes automatic, and the *_AREA_SIZE parameters are ignored by all sessions running in that mode.
If you want to handle the *_AREA_SIZE parameters manually, turn off automatic PGA memory management by setting WORKAREA_SIZE_POLICY=MANUAL; then your changes to the parameters will take effect. It is advisable, though, to leave PGA management set to automatic.
To check whether your PGA is sized properly, query the V$PGA_TARGET_ADVICE view:
SELECT round(PGA_TARGET_FOR_ESTIMATE/1024/1024) target_mb,
ESTD_PGA_CACHE_HIT_PERCENTAGE cache_hit_perc,
ESTD_OVERALLOC_COUNT
FROM V$PGA_TARGET_ADVICE;
This will show you how well your PGA is sized.
chirag -
What are the best memory management actions for the iPad mini Retina?
I purchased an iPad mini Retina 16 GB. I am concerned that, with all the capabilities of this device, 16 GB will not be sufficient. So I am asking for suggestions regarding effective memory management techniques.
Thanks. Good common sense advice, which I will apply as much as possible. I don't plan on storing a lot of videos, but with iOS 7 the apps are so good that it will be difficult to limit the number used. Also, I will be using it for photo display, but I will move photos regularly to my PC for long-term storage.
-
RE: (forte-users) memory management
Brenda,
When a partition starts, it reserves the MinimumAllocation. Within this
memory space, objects are created and more and more of this memory is
actually used. When objects are no longer referenced, they remain in memory
and the space they occupy remains unusable.
When the amount of free memory drops below a certain point, the garbage
collector kicks in and frees the space occupied by all objects that
are no longer referenced.
If garbage collecting can't free enough memory to hold the additional data
loaded into memory, then the partition will request another block of memory,
equal to the IncrementAllocation size. The partition will try to stay within
this new boundary by garbage collecting every time the available part of this
memory drops below a certain point. If the partition can't free enough
memory, it will again request another block of memory.
This process repeats itself until the partition reaches MaximumAllocation.
If that amount of memory still isn't enough, then the partition crashes.
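The growth policy described above can be modelled in a few lines. This is an illustrative sketch in Python with invented names, not Forte code; it simply restates the rules: collect garbage first, then grow by IncrementAllocation-sized blocks, and fail once MaximumAllocation is exceeded. It also mirrors the later observation that memory reserved from the operating system is never returned:

```python
class Partition:
    """Toy model of the partition allocation policy described above."""

    def __init__(self, minimum, increment, maximum):
        self.reserved = minimum   # memory reserved from the OS (never returned)
        self.increment = increment
        self.maximum = maximum
        self.live = 0             # space used by objects still referenced
        self.garbage = 0          # space held by objects no longer referenced

    def release(self, size):
        # Dropping the last reference does not free memory by itself;
        # the space stays unusable until the garbage collector runs.
        self.live -= size
        self.garbage += size

    def collect(self):
        # The garbage collector frees only unreferenced space.
        self.garbage = 0

    def allocate(self, size):
        if self.live + self.garbage + size > self.reserved:
            self.collect()                      # try to make room first
        while self.live + size > self.reserved:
            if self.reserved + self.increment > self.maximum:
                raise MemoryError("partition exceeds MaximumAllocation")
            self.reserved += self.increment     # request another block
        self.live += size
```

For example, with Partition(100, 50, 200): after allocating 80 and releasing 50, a further allocation of 60 is satisfied by garbage collection alone, without growing past the initial 100.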
Instrument ActivePages shows the memory reserved by the partition.
AllocatedPages shows the part of that memory actually used.
AvailablePages shows the part of that memory which is free.
Note that once memory is requested from the operating system, it's never
released again. Within this memory owned by the partition, the part actually
used will always be smaller. But this part will increase steadily, until the
garbage collector is started and a part of it is freed again.
There are some settings that determine when the garbage collector is
started, but I'm not sure which ones they are.
The garbage collector can be started from TOOL using
"task.Part.OperatingSystem.RecoverMemory()", but I'm not sure if that will
always actually start the garbage collector.
If you track AllocatedPages of a partition, it's always growing, even if the
partition isn't doing anything. I don't know why.
If you add AllocatedPages and AvailablePages, you should get the value of
ActivePages, but you won't. You always get a lower number and sometimes even
considerably lower. I don't know why.
Pascal Rottier
Atos Origin Nederland (BAS/West End User Computing)
Tel. +31 (0)10-2661223
Fax. +31 (0)10-2661199
E-mail: Pascal.Rottiernl.origin-it.com
++++++++++++++++++++++++++++
Philip Morris (Afd. MIS)
Tel. +31 (0)164-295149
Fax. +31 (0)164-294444
E-mail: Rottier.Pascalpmintl.ch
-----Original Message-----
From: Brenda Cumming [mailto:brenda_cummingtranscanada.com]
Sent: Tuesday, January 23, 2001 6:40 PM
To: Forte User group
Subject: (forte-users) memory management
I have been reading up on memory management and the
OperatingSystemAgent, and could use some clarification...
When a partition is brought online, is the ActivePages value set to the
MinimumAllocation value, and expanded as required?
And what is the difference between the ExpandAtPercent and
ContractAtPercent functions?
Thanks in advance,
Brenda
For the archives, go to: http://lists.xpedior.com/forte-users and use
the login: forte and the password: archive. To unsubscribe, send in a new
email the word: 'Unsubscribe' to: forte-users-requestlists.xpedior.com
The Forte runtime is millions of lines of compiled C++ code, packaged into
shared libraries (DLL's) which are a number of megabytes in size. The
space is taken by the application binary, plus the loaded DLL's, plus
whatever the current size of garbage collected memory is.
Forte allocates a garbage-collected heap that must be bigger than the size
of the allocated objects. So if you start with an 8MB heap, you will always
have at least 8MB allocated, no matter what objects you actually
instantiate. See "Memory Issues" in the Forte System Management Guide.
-tdc
Tom Childers
iPlanet Integration Server Engineering
At 10:37 PM 6/11/01 +0200, [email protected] wrote:
Hi all,
I was wondering if anyone had any experience in deploying clients on NT
concerning
the memory use of these client apps.
What is the influence of the various compiler options (optimum
performance, memory use etc)?
We seem to see a lot of the memory is taken by the Forte client apps (seen
in the Task Manager
of NT) in respect to the other native Window apps. For example an
executable of approx 4Mb takes up to
15Mb of memory. When I look at the objects retained in memory after
garbage collection, these are about
2Mb. Where do the other Mb's come from? -
Difference between nio-file-manager and nio-memory-manager
Hi,
what's the difference between nio-file-manager and nio-memory-manager? The documentation doesn't really discuss the differences as far as I know. They both use nio to store memory-mapped files, don't they? What are the advantages/disadvantages of each?
When should you choose the first one and when the second for storing a large amount of data? Can both be used to query data with the Filter API? Are there size limits on either?
Best regards
Jan
Hi Jan,
The difference is that one uses a memory mapped file and one uses direct nio memory (as part of the memory allocated by the JVM process) to store the data. Both allow storing cache data off heap making it possible to store more data with a single cache node (JVM) without long GC pauses.
If you are using a 32 bit JVM, the JVM process will be limited to a total of ~3GB on Windows and 4GB on Linux/Solaris. This includes heap and off heap memory allocation.
Regarding the size limitations of the nio-file manager, please see the following doc for more information.
With the release of 3.5 there is now the idea of a partitioned backing map, which helps create larger caches (up to 8 GB of capacity) for nio storage. Please refer to the following doc.
Both can be used to query data, but it should be noted that the indexes will be stored on the heap.
hth,
-Dave -
Hi,
I'm running
Red Hat Linux 5, MySQL and BOXI 3.1
I try and schedule a report in the CMC and the report fails with the error
A database error occured. The database error text is: {Driver Manager} Data source name not found, and no default driver specified. (WIS 10901)
I've gone through the steps in the Business Objects documentation outlining how to install unixODBC (though that doc was for R2 - I'm not certain whether this install is needed for 3.1).
Error WIS 10901 details
Database error: . Contact your administrator or database supplier for more information. (WIS 10901)
The database that provides the data to this document has generated an error.
Cause
Details about the error are provided in the section of the message indicated by the field code: .
Action
Contact your BusinessObjects administrator with the error message information, or consult the documentation provided by the supplier of the database.
Any pointers suggestions on how to set up correctly the unixODBC will be looked into.
Thanks for taking the time to view this post.
Cheers
Hi again Aravind,
I hope you're not beginning to wish you had never answered that first question from me, since it seems as if I'm now backing up the truck with regard to the entire question. If I'm asking too much of you, let me know; I don't want to overstep the line with respect to what should and shouldn't be asked in these forums.
Anyway, I looked in that env.sh script. It was huge (a pity I can't attach the file; I've appended it, but it makes these threads somewhat lengthy).
DEFAULT_ODBCFILE="$BOBJEDIR"defaultodbc.ini
export DEFAULT_ODBCFILE
ODBC_HOME="$odbc"
export ODBC_HOME
also
# setup the mysql env variables
if [ -d "$BOBJEDIR"/mysql ]; then
# mysql env variables
# set up the odbc symlink to work around:
# The DataDirect SQL Server ODBC driver on UNIX will not function properly under a
# locale other than "en_US" due to strong dependencies on their locale files.
MYSQL_UNIX_PORT="$BOBJEDIR"mysql/mysql.sock
export MYSQL_UNIX_PORT
# We want to be able to source the config file multiple times.
fi
if [ -d "$BOBJEDIR"/tomcat ]; then
# set the JAVA_OPTS for tomcat
I see what you were referring to earlier with
if [ -d "$ODBC_HOME/locale" ]; then
# the javascript files are kept here
# The machine name
# The user name
# The default registry
MYLOCALE=`locale | grep LC_MESSAGES | sed -e 's|LC_MESSAGES="||g' -e 's|"$||g'`
if [ ! -d "$ODBC_HOME/locale/$MYLOCALE" ]; then
ln -s "$ODBC_HOME/locale/en_US" "$ODBC_HOME/locale/$MYLOCALE"
fi
fi
Again cheers for your help in this matter.
#!/bin/sh
BOBJEDIR="/home/eberwick/BO_3_1/bobje/"
export BOBJEDIR
BODIR="`dirname $BOBJEDIR`/"
export BODIR
# check for existence of u flag, if it is there, turn it off.
# Set a flag so we don't source the environment more than once
# webi config file
DEFAULTFILE="$ccm.config"
if [ -f "$DEFAULTFILE" ]; then
. "$DEFAULTFILE"
fi
. "${BOBJEDIR?}setup/modify_ko_locale.sh"
SOFTWARE=`uname -s`
OBJECT_MODEL=`grep Platform $BODIR/setup/ProductID.txt | awk '{print $4;}'`
[ -z "$OBJECT_MODEL" ] && OBJECT_MODEL=32
SOFTWAREPATH=`grep SoftwarePath $BODIR/setup/ProductID.txt | awk '{print $3;}'`
U_FLAG=0
if [ X"$SOFTWARE" = "XHP-UX" ]; then
# unset the LANG so that we don't get the localized version of 'unlimited' if the localized system messages are installed.
# raise the ulimits to max allowed
# undo that bug workaround from above
# figure out what architecture we're on
# now that we're localized, deal with unknown architecture
# we include English, as localization may have failed
# set the JDK variable
if [ x`echo $- | grep "u"` != "x" ]; then
set +u
U_FLAG=1
fi
fi
if [ x"$BOBJE_ENV_SOURCED" = x ]; then
if [ -f "$setup/boconfig.cfg" ]; then
HKEY_LOCAL_MACHINE="$setup/boconfig.cfg"
export HKEY_LOCAL_MACHINE
fi
BOBJE_ENV_SOURCED="true"
export BOBJE_ENV_SOURCED
BOBJEVERSION="12.0"
export BOBJEVERSION
LANGWAS="$LANG"
unset LANG
LC_ALLWAS="$LC_ALL"
unset LC_ALL
ulimit -Sn `ulimit -Hn` # max file descriptors
ulimit -S -c `ulimit -H -c` # max core file size
ulimit -S -d `ulimit -H -d` # max data segment size
ulimit -S -f `ulimit -H -f` # max file size
ulimit -S -s `ulimit -H -s` # max stack
ulimit -S -t `ulimit -H -t` # max CPU time
LANG="$LANGWAS"; export LANG
unset LANGWAS
LC_ALL="$LC_ALLWAS"; export LC_ALL
unset LC_ALLWAS
case X"$SOFTWARE" in
XLinux) SOFTWARELC="linux"; SHAREDLIBSUFFIX=".so"; CB1LIBSUFFIX="${SHAREDLIBSUFFIX?}.12.0"; CB1SYMLINKLIBSUFFIX="${SHAREDLIBSUFFIX?}.12" ;;
XAIX) SOFTWARELC="aix"; SHAREDLIBSUFFIX=".so"; CB1LIBSUFFIX=".12.0${SHAREDLIBSUFFIX?}"; CB1SYMLINKLIBSUFFIX=".12${SHAREDLIBSUFFIX?}";;
XSunOS) SOFTWARELC="solaris"; SHAREDLIBSUFFIX=".so"; CB1LIBSUFFIX="${SHAREDLIBSUFFIX?}.12.0"; CB1SYMLINKLIBSUFFIX="${SHAREDLIBSUFFIX?}.12";;
XHP-UX)
SOFTWARELC="hpux";
if [ "$SOFTWAREPATH" = "hpux_ia64" ]; then
SHAREDLIBSUFFIX=".so";
else
SHAREDLIBSUFFIX=".sl";
fi
CB1LIBSUFFIX="${SHAREDLIBSUFFIX?}.12.0";
CB1SYMLINKLIBSUFFIX="${SHAREDLIBSUFFIX?}.12";;
esac
export SOFTWAREPATH
export SOFTWARE
export SHAREDLIBSUFFIX
export CB1LIBSUFFIX
export CB1SYMLINKLIBSUFFIX
if [ "$SOFTWAREPATH" = "" ]; then
echo "$UNKNOWNPLATFORM (unknown platform): $SOFTWARE"
exit 1
fi
if [ -d "$BOBJEDIR"/jdk ]; then
JAVA_HOME="$jdk"
export JAVA_HOME
fi
JAVA_OPTS="-d$OBJECT_MODEL -Dbobj.enterprise.home=$enterprise120 -Djava.awt.headless=true -Djava.net.preferIPv4Stack=false"
# setting AAHOME here so when CAD starts the value is set
if [ "$SOFTWARE" = "AIX" -o "$SOFTWARE" = "SunOS" -o "$SOFTWARE" = "Linux" -o "$SOFTWARE" = "HP-UX" ]; then
JAVA_OPTS="$JAVA_OPTS -Xmx1024m -XX:MaxPermSize=256m"
fi
export JAVA_OPTS
if [ -d "$Dashboard_Analytics_120" ]; then
AAHOME="$Dashboard_Analytics_120"
export AAHOME
fi
WCSDIR="$enterprise120/$SOFTWAREPATH/wcs/"
export WCSDIR
WCSBINDIR="$bin/"
export WCSBINDIR
WCSCOMPONENTDIR="$components/"
export WCSCOMPONENTDIR
BINDIR="$enterprise120/$SOFTWAREPATH/"
export BINDIR
LIBDIR="$enterprise120/$SOFTWAREPATH/"
export LIBDIR
PLUGINDIR="$enterprise120/packages/"
export PLUGINDIR
PLUGINDIST="$enterprise120/$SOFTWAREPATH/plugins/"
export PLUGINDIST
# append the new value
LOGDIR="$logging/"
export LOGDIR
if [ x"$BOE_LOGGER_ENVIRONMENT" = x ]; then
BOE_LOGGER_ENVIRONMENT="-loggingPath $LOGDIR"
else
BOE_LOGGER_ENVIRONMENT="$BOE_LOGGER_ENVIRONMENT -loggingPath $LOGDIR"
fi
export BOE_LOGGER_ENVIRONMENT
SCRIPTDIR="$enterprise120/generic/"
export SCRIPTDIR
JAVASCRIPTDIR="$setup/jscripts/"
export JAVASCRIPTDIR
MACHINENAME=`uname -n`
export MACHINENAME
removeUTF8SpecificsForKorean
STRIPPEDMACHINENAME=`hostname | sed -e 's/\..*//'`
export STRIPPEDMACHINENAME
if [ x"$BOBJEUSERNAME" = x ]; then
BOBJEUSERNAME=`id | sed -e "s|).\$||" -e "s|^.(||" `
export BOBJEUSERNAME
fi
restoreUTF8SpecificsForKorean
DEFAULT_REGFILE="$BOBJEDIR"setup/.defaultreg
export DEFAULT_REGFILE
REGFILE="$BOBJEDIR"data/.bobj
export REGFILE
BOE_REGISTRYHOME="$REGFILE/registry"
export BOE_REGISTRYHOME
DEFAULT_ODBCFILE="$BOBJEDIR"defaultodbc.ini
export DEFAULT_ODBCFILE
ODBC_HOME="$odbc"
export ODBC_HOME
# the PID file location
PIDDIR="$BOBJEDIR"serverpids
export PIDDIR
SQLRULEDIRECTORY="$LIBDIR"
export SQLRULEDIRECTORY
PATH="$BINDIR:$crpe/xvfb:$PATH"
export PATH
CRPEPATH="$enterprise120/$SOFTWAREPATH/crpe/"
export CRPEPATH
MWHOME="$mw/"
export MWHOME
BOBJEXVFBPATH="$xvfb/"
export BOBJEXVFBPATH
# once the crpe is in, we should exit if this file doesn't exist.
MWUSER_DIRECTORY="$registry/"
export MWUSER_DIRECTORY
# Uncomment this to turn off Xvfb security and allow connections from
# everyone.
# MW_XVFB_AC="1"
# export MW_XVFB_AC
# Use a seperate .Xauthority file. Comment out this line if you want
# to use the user's .Xauthority file for storing the Xvfb authentication
# tokens.
MWRT_MODE="professional"
export MWRT_MODE
# Mainwin can deadlock unless this is set
# Prevents Mainwin from popping up dialogs in some situations, causing a deadlock
# variables merged from RAS
# XVFB Manager
# Environment Variables:
#   MW_XVFB_EXE = Name of the Xvfb exe. Default is 'Xvfb'.
#   MW_XVFB_DAEMON = Name of the XvfbDaemon exe. Default is 'XvfbDaemon'.
#   MW_XVFB_DAEMON_PORT = Port number that Xvfb Daemon will listen too. Default is 5222.
#   MW_XVFB_DAEMON_HOST = Host on which the XvfbDeamon is running. Default is Local host.
#   MW_XVFB_DAEMON_XVFB = Number of Xvfb to run. Default is '5'.
#   MW_XVFB_DAEMON_DISPLAY = Starting display number for Xvfb. Default is '1'.
#   MW_XVFB_DAEMON_PROFILE = Path to the Security Profile for Xvfb. Default is 'SecurityProfile'.
#   MW_XVFB_DAEMON_TRACE = Set to turn on tracing information. Default is undefined.
#   MW_XVFB_DAEMON_DIE = Turn off the exit code if no more connections. Default is undefined.
#   MW_XVFB_FONT = Locations from which to load font
# By this symbol being defined, the checking for a current set display is disabled.
# Set to turn on tracing info when defined. Default is undefined.
MW_XVFB_DAEMON_TRACE=defined
export MW_XVFB_DAEMON_TRACE
# RAS Home
MWREGISTRY=":$MWUSER_DIRECTORY/hklm_$.bin"
export MWREGISTRY
MWCORE_PRIVATE_DATA="$MWUSER_DIRECTORY/core_data"
export MWCORE_PRIVATE_DATA
if [ -f "$MWHOME"setmwruntime ]; then
. "$MWHOME"setmwruntime
fi
MWNT_OLE_DOCS=true
export MWNT_OLE_DOCS
MWPRINTER_DPI=600
export MWPRINTER_DPI
MWVISUAL_CLASS="TrueColor"
export MWVISUAL_CLASS
if [ "$SOFTWAREPATH" = "hpux_ia64" ]; then
MWTHREAD_STACK="200000"
else
MWTHREAD_STACK="FA000"
fi
export MWTHREAD_STACK
MWFONT_DIR_PATH="$fonts/"
export MWFONT_DIR_PATH
MW_XVFB_DAEMON_FONT="$misc/"
export MW_XVFB_DAEMON_FONT
XAUTHORITY="$xvfb/.Xauthority"
export XAUTHORITY
MWDEBUG_LEVEL=0
export MWDEBUG_LEVEL
MWINVISIBLE_DISPLAY=1
export MWINVISIBLE_DISPLAY
MWNO_SIGCHLD_IGNORE=1
export MWNO_SIGCHLD_IGNORE
MWLOOK=motif
export MWLOOK
MW_XVFB_DAEMON_PROFILE="$BOBJEXVFBPATH/SecurityPolicy"
export MW_XVFB_DAEMON_PROFILE
MW_XVFB_DAEMON_IGNORE_DISPLAY="true"
export MW_XVFB_DAEMON_IGNORE_DISPLAY
if [ "$SOFTWARE" = "HP-UX" ]; then
MW_XVFB_DAEMON_XVFB=10
else
MW_XVFB_DAEMON_XVFB=5
fi
export MW_XVFB_DAEMON_XVFB
MWNO_FILE_LOCKING=true
export MWNO_FILE_LOCKING
MWNO_SIGNAL_CATCHING=true
export MWNO_SIGNAL_CATCHING
RASHOME="$enterprise120/$SOFTWAREPATH/ras/"
export RASHOME
LIBRARYPATH="$LIBDIR:$WCSCOMPONENTDIR:$PLUGINDIST/auth/secEnterprise:$enterprise120/$SOFTWAREPATH/crpe:$:$PLUGINDIST/desktop/CrystalEnterprise.Report:$enterprise120/$SOFTWAREPATH/ras:$mysql/lib"
# May optionally be set to MALLOCMULTIHEAP=heaps:n[,considersize]
# where n is scaled to the number of CPUs (usually 2x).
# Setting to MALLOCMULTIHEAP=1 enables system defaults.
# setting MALLOCMULTIHEAP to 'considersize' fixes an AIX memory leak and significantly reduces the memory footprint.
# env variable to fix the default cpu affinity
# env variable to fix dlopen/dlclose behaviour to be more like ELF-based systems
# aix thread stack overflow guarding : won't catch if overflow is more than 4k, but better than nothing
# aix specific ulimit changes
# unset the LANG so that we don't get the localized version of 'unlimited' if the localized system messages are installed.
# undo that bug workaround from above
# set the aix thread scope to system (1:1)
# better core naming for aix 5
# Check if memory windows is enabled in the kernal parameters
# We will support memory windows, either through the "BOE120_HP_MEMWIN_ID" environment variable,
# or through the "BusinessObjectsEnterprise120" memory window key in /etc/services.window
# Use memory windows if available on HP-UX.
# For both HPUX Itanium and PA-RISC
# Reduce the number of arenas from 8 (default) to 1 (min) which solves memory blowup issue.
# Enable the thread local cache to compensate.
# http://www.docs.hp.com/en/B2355-60130/malloc.3C.html
if [ "$SOFTWARE" = "AIX" ]; then
LIBPATH="$LIBRARYPATH:$LIBPATH"
export LIBPATH
if [ x"$MALLOCMULTIHEAP" = x ]; then
# CRConfig env variable for DCP
MALLOCMULTIHEAP="considersize"
export MALLOCMULTIHEAP
fi
RT_GRQ=ON
export RT_GRQ
LDR_CNTRL=IGNOREUNLOAD
export LDR_CNTRL
AIXTHREAD_GUARDPAGES=1
export AIXTHREAD_GUARDPAGES
LANGWAS="$LANG"
unset LANG
LC_ALLWAS="$LC_ALL"
unset LC_ALL
ulimit -S -m `ulimit -H -m` # max memory
LANG="$LANGWAS"; export LANG
unset LANGWAS
LC_ALL="$LC_ALLWAS"; export LC_ALL
unset LC_ALLWAS
AIXTHREAD_SCOPE="S"
export AIXTHREAD_SCOPE
Version=`uname -v`
Release=`uname -r`
if [ "$Version" -gt 4 ]; then
CORE_NAMING=ON
export CORE_NAMING
fi
AIXTHREAD_MUTEX_DEBUG=OFF
export AIXTHREAD_MUTEX_DEBUG
AIXTHREAD_COND_DEBUG=OFF
export AIXTHREAD_COND_DEBUG
AIXTHREAD_RWLOCK_DEBUG=OFF
export AIXTHREAD_RWLOCK_DEBUG
elif [ "$SOFTWARE" = "HP-UX" ]; then
if [ "$SOFTWAREPATH" = "hpux_ia64" ]; then
SHLIB_PATH="$LIBRARYPATH:$SHLIB_PATH:$JAVA_HOME/jre/lib/IA64W.0/server"
else
SHLIB_PATH="$LIBRARYPATH:$SHLIB_PATH:$JAVA_HOME/jre/lib/PA_RISC2.0/server"
fi
export SHLIB_PATH
MAX_MEM_WINDOW=`/usr/sbin/kctune | grep max_mem_window | awk '{print $2}'`
if [[ "$MAX_MEM_WINDOW" != "0" ]]; then
if [[ "$BOE120_HP_MEMWIN_ID" = "" && -r "/etc/services.window" && -x "/usr/bin/getmemwindow" ]]; then
BOE120_HP_MEMWIN_ID="`/usr/bin/getmemwindow BusinessObjectsEnterprise120`"
export BOE120_HP_MEMWIN_ID
fi
if [[ -x "/usr/bin/setmemwindow" ]]; then
if [[ "$BOE120_HP_MEMWIN_ID" != "" ]]; then
CE_CMDLINE_PREFIX="/usr/bin/setmemwindow -f -i $BOE120_HP_MEMWIN_ID "
fi
fi
export CE_CMDLINE_PREFIX
fi
export _M_ARENA_OPTS=1:8
export _M_CACHE_OPTS=100:8:0
elif [ "$SOFTWARE" = "Linux" ]; then
LD_LIBRARY_PATH="$LIBRARYPATH:$perl/lib/5.8.0/i386-linux-thread-multi/CORE:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH
else
LD_LIBRARY_PATH="$LIBRARYPATH:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH
fi
CRCONFIGFILE="$java/CRConfig.xml"
if [ -f "$CRCONFIGFILE" ]; then
CRConfig11="$CRCONFIGFILE"
export CRConfig11
fi
if [ -d "$ODBC_HOME/locale" ]; then
# Set up the ODBC locale symlink to work around the following:
# the DataDirect SQL Server ODBC driver on UNIX will not function properly under a
# locale other than "en_US" due to strong dependencies on its locale files.
MYLOCALE=`locale | grep LC_MESSAGES | sed -e 's|LC_MESSAGES="||g' -e 's|"$||g'`
if [ ! -d "$ODBC_HOME/locale/$MYLOCALE" ]; then
ln -s "$ODBC_HOME/locale/en_US" "$ODBC_HOME/locale/$MYLOCALE"
fi
fi
# This was originally called TMPDIR, but MainWin now supports MW_TMPDIR, so we export MW_TMPDIR.
# ADAPT00506764 tracks the original issue with TMPDIR/Essbase connectivity.
# MySQL now sets its TMPDIR in mysqlstartup.sh.
# Set the tmp dir locally if the value is not already set.
if [ x"$MW_TMPDIR" = x ]; then
if [ ! -d "$BOBJEDIR"/tmp ]; then
mkdir -p "$BOBJEDIR"/tmp
fi
MW_TMPDIR="$BOBJEDIR"/tmp
export MW_TMPDIR
fi
# Comment this out to turn off the custom Solaris memory allocator.
if [ "$SOFTWAREPATH" = "solaris_sparc" ]; then
LD_PRELOAD="libhoard.so.1"
export LD_PRELOAD
# Need to set up a 64-bit-specific library path so that 64-bit processes will
# preload the 64-bit version of the memory allocator, not the 32-bit version.
LD_LIBRARY_PATH_64="$enterprise120/solaris_sparcv9"
export LD_LIBRARY_PATH_64
fi
# Set up the MySQL env variables.
if [ -d "$BOBJEDIR"/mysql ]; then
# mysql env variables
MYSQL_UNIX_PORT="$BOBJEDIR"mysql/mysql.sock
export MYSQL_UNIX_PORT
fi
# Call env.sh from sub-directories (presumably from add-on installs).
for dir in "${BOBJEDIR?}/setup"/*
do
if [ -r "${dir?}/env.sh" ]; then
. "${dir?}/env.sh"
fi
done
fi
if [ X"$SOFTWARE" = "XHP-UX" ]; then
# Check for the existence of U_FLAG; if it is set, turn -u back on.
if [ "$U_FLAG" = 1 ]; then
set -u
fi
fi
-
Mac OS X Lion performance problem - broken memory management
Starting with OS X 10.5 there have been evident memory management problems in Mac OS X. Even back then the web was cluttered with complaints about the system slowing down dramatically after some time. At the time I had a slower machine, a Mac Mini with 1 GB RAM, so I (wrongly) concluded that it was due to inferior hardware.
Now i have 2010 MBP, core i7, 8 GB RAM, dual GPU.
Mac OS X Snow Leopard was a pain, but after migrating to OS X Lion, doing any serious work on the Mac became a nightmare.
I finally managed to reproduce the problematic scenario, so I ran the test and recorded the screen as a video:
http://www.youtube.com/watch?v=u5wZwZh61_4
I ran the tar+bzip command, which is basic Unix stuff, on a large number of picture files in my Pictures/ folder. Just before starting, I ran the "purge" command to flush inactive/cached program data.
You can see in the video that free memory starts to drop very fast while inactive memory rises steadily. If you look at the "bsdtar" process, it uses only a fraction of the RAM, so the problem is not in that process. You cannot call it a program memory leak either, because then the growth would show up in active/wired memory, not in inactive.
When free memory dropped below 100 MB, I started some apps, like Safari, iPhoto and MS Word, and as the video shows it can take minutes to launch an app that would normally (with free RAM available) load in 3-5 seconds.
I ran the same scenario with the same commands on my Linux CentOS 6 box: no problem there! Memory usage was some 10-20 MB, with no cache/buffer trouble.
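For reference, the whole reproduction is only two commands. Below is a minimal sketch of it; the source folder and archive path are placeholders of my choosing, and since `purge` exists only on macOS it is guarded here:

```shell
#!/bin/sh
# Folder to archive and destination archive; both paths are illustrative.
SRC="${1:-$HOME/Pictures}"
OUT="${2:-/tmp/pictures-backup.tar.bz2}"

# Flush inactive/cached file pages first so the test starts from "clean" memory
# (purge is macOS-specific, so skip it quietly on other systems).
command -v purge >/dev/null 2>&1 && purge

# Archive and bzip2-compress the folder in one pass. On macOS the system tar
# is bsdtar; streaming every file through the compressor is what fills the
# file cache and makes "inactive" memory balloon.
[ -d "$SRC" ] && tar cjf "$OUT" -C "$(dirname "$SRC")" "$(basename "$SRC")"
```

Watching Activity Monitor (or `vm_stat`) while this runs should show the free/inactive behaviour described above.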
The memory management in Mac OS X must be very broken!
Broken? That's a bit harsh. Immature? That's perhaps a better explanation.
This paper describes Priority Paging as implemented in Solaris 2.7 back in 1998, and that's essentially what Mac OS X needs today:
The problem is that when pages are needed, no distinction is made between system file-cache pages and application pages; worse, the file cache can actually steal pages needed by applications.
Finally, when the Dynamic Pager starts up and needs to begin swapping things out, it is fairly heavyweight in operation and causes the "UI not responding" cursor (aka the spinning beach ball) to appear. -
Memory Management in LabVIEW / DLL
Hi all,
I have a problem concerning LabVIEW's memory management. If my data is bigger than 1 GB, LabVIEW crashes with an "Out of Memory" error message (since LabVIEW passes data only by value, not by reference, 1 GB is easily reached). My idea is to divide the data structure into smaller structures and stream them from hard disk as they are needed. To do so, I have to call a DLL which reads this data from disk. But since a hard disk is very slow compared to RAM, the LabVIEW program becomes very slow.
Another approach was to allocate memory in the DLL and pass the pointer back to LabVIEW, like creating a RAM disk and reading the data from that disk. But the memory is allocated in the context of LabVIEW, so LabVIEW crashes when that memory is corrupted by C++. Allocating memory with the LabVIEW h-files included doesn't help, because the memory is still allocated in the LabVIEW context. So does anybody know whether it's possible to allocate memory in a C++ DLL outside the LabVIEW context, so that I can read my data with a DLL by passing the pointer to that DLL from LabVIEW? It should work the following way:
- Start the LabVIEW program --> allocate an amount of memory for the data, return the pointer to LabVIEW
- Work with the program and the data; whenever some data is needed, the DLL reads from the memory space the pointer points at
- Stop the LabVIEW program --> the memory is freed
Remember: The data structure should be used like a global variable in a DLL or like a ramdisk!
Hope you can understand my problem
Thanks in advance
Christian
THINK G!! ;-)
Using LabView 2010 and 2011 on Mac and Win
Programming in Microsoft Visual C++ (Win) and Xcode (Mac)
If you have multiple subVIs grabbing 200 MB each, you might try using the "Request Deallocation" function so that once a VI is done processing it releases the memory.
LabVIEW Help: "When a top-level VI calls a subVI, LabVIEW allocates a data space
of memory in which that subVI runs. When the subVI finishes running, LabVIEW
usually does not deallocate the data space until the top-level VI finishes
running or until the entire application stops, which can result in out-of-memory
conditions and degradation of performance. Use this function to deallocate the
data space immediately after the VI completes execution."
Programming >> Application Control >> Memory Control >> Request Deallocation
I think it first appeared in LabVIEW 7.1.
Message Edited by Troy K on 07-14-2008 09:36 AM
Troy
CLD
Each snowflake in an avalanche pleads not guilty. - Stanislaw J. Lec
I haven't failed, I've found 10,000 ways that don't work - Thomas Edison
Beware of the man who won't be bothered with details. - William Feather
The greatest of faults is to be conscious of none. - Thomas Carlyle