Sharing data stores between BE6000 UCS Servers
Hi,
I have two UCS C220 M3 servers running Business Edition version 9.1.
The servers are in the same data centre and each host a CUCM server, Unity Connection server and Presence Server.
This is great as each application has two servers, so there is survivability in the case that one UCS server fails.
Now the customer wants an Attendant Console and has purchased the Cisco Unified Business Attendant Console which does not have a high availability option.
I have installed this on one UCS server, where it is running happily. To provide some resilience, what I am considering is shutting down the CUBAC VM, copying its files to the other UCS server and adding it to the inventory of that server.
I have tested this by manually copying the files to my PC then uploading them to the second UCS server and it worked ok but took a fair time.
I was wondering if there are any ways to improve/automate this.
My first thought was whether it is possible to share the internal storage of the UCS servers with each other using some clever storage tricks (iSCSI, NFS?).
If not is it supported/recommended to add external storage (say a NAS) to the BE6000 servers and use this as a shared storage area?
If yes to the above is there a way to automate the shutdown, copying and restart of the CUBAC virtual machine on the first UCS server?
My other concern is about leaving a Windows Server VM powered off for an extended period. Is this a very bad thing as it will miss updates or is it something that Windows will deal with gracefully?
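Assuming ESXi shell access on the BE6000 hosts, the shutdown/copy/restart sequence could be sketched as a dry-run script like the one below. The VM ID, paths and destination host are placeholders (look the real ID up with `vim-cmd vmsvc/getallvms`), and the script only prints the commands rather than executing them:

```shell
#!/bin/sh
# Dry-run sketch: print (not execute) the ESXi commands that would shut
# down the CUBAC VM, copy its folder to the second host, and restart it.
# VM_ID, SRC and DST are hypothetical placeholders for this example.
VM_ID=12                                   # from "vim-cmd vmsvc/getallvms"
SRC=/vmfs/volumes/datastore1/CUBAC         # source VM folder on host 1
DST=ucs-host2:/vmfs/volumes/datastore1/    # destination on host 2

run() { echo "would run: $*"; }            # dry-run: print instead of execute

run vim-cmd vmsvc/power.shutdown "$VM_ID"  # graceful guest OS shutdown
run scp -r "$SRC" "$DST"                   # copy the VM files between hosts
run vim-cmd vmsvc/power.on "$VM_ID"        # restart the VM on host 1
```

Replacing `run` with direct execution (and adding a wait for the guest to finish shutting down) would turn this into a real job, but as written it is only an illustration of the sequence.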
I realise that this is a bit wacky and a far better solution would have been to buy an Attendant Console with high availability built in to it.
Thanks
Guys,
Thanks for the responses.
I know what I am asking is far from ideal but this is the position that I have been put in by circumstances.
If I had been involved in the selection of the console CUBAC would not have been selected and I would not be in this position (I would probably have used Fidelus console with queuing provided by CUCM hunt groups).
Unfortunately I have to play the hand I have been dealt hence my questions about sharing data stores etc.
I acknowledge that I am stepping away from best practice but if anyone has input on my original questions I would be grateful.
Thanks
James
Similar Messages
-
Hi,
I found the thread Cannot attach data store shared-memory segment using JDBC (TT0837) but it can't help me out.
I encounter this issue on Windows XP, and the application gets its connection from a JBoss data source.
url=jdbc:timesten:direct:dsn=test;uid=test;pwd=test;OraclePWD=test
username=test
password=test
Error information:
java.sql.SQLException: [TimesTen][TimesTen 11.2.1.5.0 ODBC Driver][TimesTen]TT0837: Cannot attach data store
shared-memory segment, error 8 -- file "db.c", lineno 9818, procedure "sbDbConnect"
at com.timesten.jdbc.JdbcOdbc.createSQLException(JdbcOdbc.java:3295)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3444)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3409)
at com.timesten.jdbc.JdbcOdbc.SQLDriverConnect(JdbcOdbc.java:813)
at com.timesten.jdbc.JdbcOdbcConnection.connect(JdbcOdbcConnection.java:1807)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:303)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:159)
What confuses me is that if I connect through plain JDBC there is no such error:
Connection conn = DriverManager.getConnection("url", "username", "password");
Regards,
Nesta
I think error 8 is:
net helpmsg 8
Not enough storage is available to process this command.
If I'm wrong I'm happy to be corrected. If you reduce the PermSize and TempSize of the datastore (just as a test) does this allow JBOSS to load it?
You don't say whether this is 32bit or 64bit Windows. If it's the former, the following information may be helpful.
"Windows manages virtual memory differently than all other OSes. The way Windows sets up memory for DLLs guarantees that the virtual address space of each process is badly fragmented. Other OSes avoid this by densely packing shared libraries.
A TimesTen database is represented as a single contiguous shared segment. So for an application to connect to a database of size n, there must be n bytes of unused contiguous virtual memory in the application's process. Because of the way Windows manages DLLs this is sometimes challenging. You can easily get into a situation where simple applications that use few DLLs (such as ttIsql) can access a database fine, but complicated apps that use many DLLs can not.
As a practical matter this means that TimesTen direct-mode in Windows 32-bit is challenging to use for those with complex applications. For large C/C++ applications one can usually "rebase" DLLs to reduce fragmentation. But for Java based applications this is more challenging.
You can use tools like the free "Process Explorer" to see the used address ranges in your process.
Naturally, 64-bit Windows basically resolves these issues by providing a dramatically larger set of addresses." -
How to Integrate real time data between 2 database servers
May 31, 2006 2:45 AM
I have a scenario where the database (DB2/400) is maintained by an AS/400 application, and my new J2EE-based website application accesses the same database, but the performance is very low. So we have thought of introducing a new Oracle database which will be accessed by the J2EE application, and all the data from the DB2/400 database will be replicated to the Oracle database. In that scenario the only problem is real-time data exchange between the two databases. How do we achieve that, considering that both the AS/400 application and the J2EE website application run in parallel and access the same information lying in the DB2/400 database? We also have to look at transaction management.
Thanks
Panky
DrClap
Posts: 25,835
Registered: 4/30/99
Re: How to Integrate real time data between 2 database servers
May 31, 2006 11:16 AM (reply 1 of 2)
You certainly wouldn't use XML for this.
The process you're looking for is called "replication". Ask your database experts about it.
I predict that after you spend all the money to install Oracle and hire consultants to make it replicate the DB2/400 database, your performance problem will be worse.
panks
Posts: 1
Registered: 5/31/06
Re: How to Integrate real time data between 2 database servers
May 31, 2006 11:55 PM (reply 2 of 2)
Yeah, I know that it's not an XML solution.
Replication is one option, but the AS/400 application which uses the DB2/400 database is heavily loaded, and the proposed website also uses the same database for retrieval and update purposes. All the inventory is maintained in the DB2/400 database, so I have thought of introducing a new Oracle database which will be accessed by the new website and will have all the relevant table structures along with data from the DB2/400 application. Now, whenever there is an order placement from the new website, it should first update the Oracle database, and then this data should also migrate to the DB2/400 application in real time, so that the main inventory lying on DB2/400 is updated on a real-time basis, because order placement is also possible from the AS/400 application. The user of the AS/400 application should not get wrong data.
Is it possible to use MQ products??
-Panky
Hi,
the answer to your question is not easy. Synchronizing, integrating or replicating data between 2 (or more) database servers is a very complicated task, even though it doesn't look like it.
Firstly, I would recommend creating a good analysis of the data flow.
Important things are:
1) What is the primary side for data creation? In other words, on which side - DB2 or Oracle - are the primary data (they are created here) and on which side are the secondary data (just copies)?
2) On which side are data changed - only on the DB2 side, only on the Oracle side, or on both sides?
3) Are there data which are changed on both sides concurrently? If so, how should conflicts be resolved?
4) What does "real time" mean? Is it up to 1 ms, 1 s, 1 min or 1 hour?
5) What should be done when replication does not work? I mean a replication crash etc.
BTW, the word "change" above means INSERT, UPDATE and DELETE commands.
The analysis should be done for every column in every table. When the analysis is ready you can select the best system for your solution (Oracle replication, Sybase Replication Server, MQ, EJB or your own proprietary solution). Without the analysis it will be, IMHO, a shot in the dark. -
Getting Error : Cannot attach data store shared-memory segment,
HI Team,
I am trying to integrate Timesten IMDB in my application.
Machine details
Windows 2003, 32 bit, 4GB RAM.
IMDB DB details
Permanent size 500MB, temp size 40MB.
If I try to connect to the database using ttIsql, it gets connected. But if I try to connect in my Java application I get the following exception.
java.sql.SQLException: [TimesTen][TimesTen 11.2.1.3.0 ODBC Driver][TimesTen]TT0837: Cannot attach data store shared-memory segment, error 8 -- file "db.c", lineno 7966, procedure "sbDbCreate"
at com.timesten.jdbc.JdbcOdbc.createSQLException(JdbcOdbc.java:3269)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3418)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3383)
at com.timesten.jdbc.JdbcOdbc.SQLDriverConnect(JdbcOdbc.java:787)
at com.timesten.jdbc.JdbcOdbcConnection.connect(JdbcOdbcConnection.java:1800)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:303)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:159)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:207)
Maximum permanent size that works with Java application is 100MB. But it would not be enough for our use.
Could anybody let me know the way to resolve/reason for getting this error? Any response would be appreciated.
Thanks in Advance,
Regards,
atul
This is a very common problem on 32-bit Windows. A TimesTen datastore is a single region of 'shared memory' allocated as a shared mapping from the paging file. In 'direct mode', when the application process (in your case either ttIsql or the JVM) 'connects' to the datastore, the datastore memory region is mapped into the process address space. In order for this to happen there must be a free region in the process address space that is at least the size of the datastore. This region must be contiguous (i.e. a single region). Unfortunately, the process memory map in 32-bit Windows is typically highly fragmented, and the more DLLs a process uses the worse this is. Also, JVMs typically use a lot of memory, depending on configuration.
Your options to solve this are really limited to:
1. Significantly reduce the memory used by the JVM (may not be possible).
2. Use a local client/server connection from Java instead of a direct mode connection. To minimise the performance overhead, make sure you use the optimised ShmIpc connectivity rather than TCP/IP. Even with this there is likely to be a >50% reduction in performance compared to direct mode.
3. Switch to 64-bit Windows, 64-bit TimesTen and 64-bit Java. Even without adding any extra memory to your machine this will very likely fix the problem.
Option (3) is by far the best one.
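As a sketch of option 2 (the DSN and server names here are placeholders, and the exact attributes should be checked against the TimesTen client/server documentation for your release), the client DSN points at a logical server whose address is the special value ttShmHost, which selects the shared-memory IPC transport instead of TCP/IP:

```ini
; sys.odbc.ini (client side) -- hypothetical DSN names
[mydb_cs]
TTC_SERVER=localsrv
TTC_SERVER_DSN=mydb

; ttconnect.ini -- ttShmHost selects ShmIpc rather than a TCP/IP socket
[localsrv]
Network_Address=ttShmHost
TCP_PORT=53397
```

The Java application would then connect with a `jdbc:timesten:client:dsn=mydb_cs` URL rather than a direct-mode URL.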
Regards,
Chris -
Cannot attach data store shared-memory segment using JDBC (TT0837)
I'm currently evaluating TimesTen during which I've encountered some problems.
All of a sudden my small Java app fails to connect to the TT data source.
Though I can still connect to the data source using ttisql.
Everything worked without problems until I started poking around in the ODBC administrator (Windows 2K).
I wanted to increase permanent data size so I changed some of the parameters.
After that my Java app fails to connect with the following message:
DriverManager.getConnection("jdbc:timesten:direct:dsn=rundata_tt60;OverWrite=0;threadsafe=1;durablecommits=0")
trying driver[className=com.timesten.jdbc.TimesTenDriver,com.timesten.jdbc.TimesTenDriver@addbf1]
SQLException: SQLState(08001) vendor code(837)
java.sql.SQLException: [TimesTen][TimesTen 6.0.4 ODBC Driver][TimesTen]TT0837: Cannot attach data store shared-memory segment, error 8 -- file "db.c", lineno 8846, procedure "sbDbConnect()"
The TT manual hasn't really provided any good explanation of what the error code means.
Obviously I've already tried restoring the original ODBC parameters, without any luck.
Ideas..anyone?
/Peter
Peter,
Not sure if you have resolved this issue or not. In any case, here is some information to look into.
- On Windows 32-bit, the allocation of shared data segment doesn't work the same way like on Unix and Linux. As a result, the maximum TimesTen database size one can allocate is much smaller on the Windows platform than on other platforms.
- Windows error 8 means ERROR_NOT_ENOUGH_MEMORY: not enough storage is available to process this command.
- TimesTen TT0837 says the system was unable to attach a shared memory segment during a data store creation or data store connection operation.
- What was the largest successful perm-size and temp-size you used when allocating the TimesTen database?
* One explanation for why you were able to connect using ttIsql is that it doesn't use many DLLs, whereas your Java application typically has a lot more.
* As a troubleshooting step, you can try reducing your TempSize to a very small value and just see if you can connect to the data store. Eventually, you may need to reduce your PermSize to get Windows to fit the shared data segment into the process space.
By the way the TimesTen documentation has been modified to document this error as follows:
Unable to attach to a shared memory segment during a data store creation or data store connection operation.
You will receive this error if a process cannot attach to the shared memory segment for the data store.
On UNIX or Linux systems, the shmat call can fail due to one of:
- The application does not have access to the shared memory segment. In this case the system error code is EACCESS.
- The system cannot allocate memory to keep track of the allocation, or there is not enough data space to fit the segment. In this case the system error code is ENOMEM.
- The attach exceeds the system limit on the number of shared memory segments for the process. In this case the system error code is EMFILE.
It is possible that some UNIX or Linux systems will have additional possible causes for the error. The shmat man page lists the possibilities.
On Windows systems, the error could occur because of one of these reasons:
- Access denied
- The system has no handles available.
- The segment cannot be fit into the data section
Hope this helps.
-scheung -
Cannot create data store shared-memory segment error
Hi,
Here is some background information:
[ttadmin@timesten-la-p1 ~]$ ttversion
TimesTen Release 11.2.1.3.0 (64 bit Linux/x86_64) (cmttp1:53388) 2009-08-21T05:34:23Z
Instance admin: ttadmin
Instance home directory: /u01/app/ttadmin/TimesTen/cmttp1
Group owner: ttadmin
Daemon home directory: /u01/app/ttadmin/TimesTen/cmttp1/info
PL/SQL enabled.
[ttadmin@timesten-la-p1 ~]$ uname -a
Linux timesten-la-p1 2.6.18-164.6.1.el5 #1 SMP Tue Oct 27 11:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@timesten-la-p1 ~]# cat /proc/sys/kernel/shmmax
68719476736
[ttadmin@timesten-la-p1 ~]$ cat /proc/meminfo
MemTotal: 148426936 kB
MemFree: 116542072 kB
Buffers: 465800 kB
Cached: 30228196 kB
SwapCached: 0 kB
Active: 5739276 kB
Inactive: 25119448 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 148426936 kB
LowFree: 116542072 kB
SwapTotal: 16777208 kB
SwapFree: 16777208 kB
Dirty: 60 kB
Writeback: 0 kB
AnonPages: 164740 kB
Mapped: 39188 kB
Slab: 970548 kB
PageTables: 10428 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 90990676 kB
Committed_AS: 615028 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 274804 kB
VmallocChunk: 34359462519 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
extract from sys.odbc.ini
[cachealone2]
Driver=/u01/app/ttadmin/TimesTen/cmttp1/lib/libtten.so
DataStore=/u02/timesten/datastore/cachealone2/cachealone2
PermSize=14336
OracleNetServiceName=ttdev
DatabaseCharacterset=WE8ISO8859P1
ConnectionCharacterSet=WE8ISO8859P1
[ttadmin@timesten-la-p1 ~]$ grep SwapTotal /proc/meminfo
SwapTotal: 16777208 kB
Though we have around 140GB of memory available and 64GB on shmmax, we are unable to increase the PermSize to anything more than 14GB. When I changed it to PermSize=15359, I got the following error.
[ttadmin@timesten-la-p1 ~]$ ttIsql "DSN=cachealone2"
Copyright (c) 1996-2009, Oracle. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
connect "DSN=cachealone2";
836: Cannot create data store shared-memory segment, error 28
703: Subdaemon connect to data store failed with error TT836
The command failed.
Done.
I am not sure why this is not working, considering we have got 144GB RAM and 64GB shmmax allocated! Any help is much appreciated.
Regards,
Raj
Those parameters look OK for a 100GB shared memory segment. Also check the following:
ulimit - a mechanism to restrict the amount of system resources a process can consume. Your instance administrator user, the user who installed Oracle TimesTen needs to be allocated enough lockable memory resource to load and lock your Oracle TimesTen shared memory segment.
This is configured with the memlock entry in the OS file /etc/security/limits.conf for the instance administrator.
To view the current setting run the OS command
$ ulimit -l
and to set it to a value dynamically use
$ ulimit -l <value>.
Once changed you need to restart the TimesTen master daemon for the change to be picked up.
$ ttDaemonAdmin -restart
Beware sometimes ulimit is set in the instance administrators "~/.bashrc" or "~/.bash_profile" file which can override what's set in /etc/security/limits.conf
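A quick sanity check of the current limit against the DSN in this thread (a sketch: 14336 MB is the PermSize from the sys.odbc.ini extract above, and the real requirement also includes TempSize, the log buffer and overhead):

```shell
#!/bin/sh
# Rough check: is the memlock limit at least as large as PermSize?
NEED_KB=$((14336 * 1024))   # PermSize=14336 MB expressed in KB
CUR=$(ulimit -l)            # current memlock limit: "unlimited" or a KB value
if [ "$CUR" = "unlimited" ] || [ "$CUR" -ge "$NEED_KB" ]; then
  echo "memlock looks OK ($CUR)"
else
  echo "memlock too low: $CUR KB < $NEED_KB KB"
fi
```

Run this as the instance administrator user, since limits.conf entries are per-user.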
If this is ok then it might be related to Hugepages. If TT is configured to use Hugepages then you need enough Hugepages to accommodate the 100GB shared memory segment. TT is configured for Hugepages if the following entry is in the /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/ttendaemon.options file:
-linuxLargePageAlignment 2
So if configured for Hugepages please see this example of how to set an appropriate Hugepages setting:
Total the amount of memory required to accommodate your TimesTen database from /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/sys.odbc.ini
PermSize+TempSize+LogBufMB+64MB Overhead
For example consider a TimesTen database of size:
PermSize=250000 (unit is MB)
TempSize=100000
LogBufMB=1024
Total Memory = 250000+100000+1024+64 = 351088MB
The Hugepages pagesize on the Exalytics machine is 2048KB or 2MB. Therefore divide the total amount of memory required above in MB by the pagesize of 2MB. This is now the number of Hugepages you need to configure.
351088/2 = 175544
As user root edit the /etc/sysctl.conf file
Add/modify vm.nr_hugepages= to be the number of Hugepages calculated.
vm.nr_hugepages=175544
Add/modify vm.hugetlb_shm_group = 600
This parameter is the group id of the TimesTen instance administrator. In the Exalytics system this is oracle. Determine the group id while logged in as oracle with the following command. In this example it’s 600.
$ id
$ uid=700(oracle) gid=600(oinstall) groups=600(oinstall),601(dba),700(oracle)
As user root edit the /etc/security/limits.conf file
Add/modify the oracle memlock entries so that the fourth field equals the total amount of memory for your TimesTen database. The unit for this value is KB. For example this would be 351088*1024=359514112KB
oracle hard memlock 359514112
oracle soft memlock 359514112
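Both numbers above come from the same arithmetic; as a quick sanity check, the example values can be recomputed like this (all sizes in MB, 2 MB hugepages):

```shell
#!/bin/sh
# Reproduce the sizing arithmetic from the example above.
PERM=250000; TEMP=100000; LOGBUF=1024; OVERHEAD=64
TOTAL_MB=$((PERM + TEMP + LOGBUF + OVERHEAD))   # memory the segment needs
HUGEPAGES=$((TOTAL_MB / 2))                     # number of 2 MB hugepages
MEMLOCK_KB=$((TOTAL_MB * 1024))                 # memlock limit is in KB
echo "total=${TOTAL_MB}MB hugepages=${HUGEPAGES} memlock=${MEMLOCK_KB}KB"
```

For the example database this prints total=351088MB, hugepages=175544 and memlock=359514112KB, matching the values used in vm.nr_hugepages and limits.conf above.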
THIS IS VERY IMPORTANT: in order for the above changes to take effect you need to either shut down the BI software environment (including TimesTen) and reboot, or issue the following OS command to make the changes permanent.
$ sysctl -p
Please note that dynamic setting (including using 'sysctl -p') of vm.nr_hugepages while the system is up may not give you the full number of Hugepages that you have specified. The only guaranteed way to get the full complement of Hugepages is to reboot.
Check Hugepages has been setup correctly, look for Hugepages_Total
$ cat /proc/meminfo | grep Huge
Based on the example values above you would see the following:
HugePages_Total: 175544
HugePages_Free: 175544 -
836: Cannot create data store shared-memory segment, error 22
Hi,
I am hoping that there is an active TimesTen user community out there who could help with this, or the TimesTen support team who hopefully monitor this forum.
I am currently evaluating TimesTen for a global investment organisation. We currently have a large data warehouse, where we utilise summary views and query rewrite, but we have isolated some data that we would like to store in memory and then be able to report on through a J2EE website.
We are evaluating TimesTen versus developing our own custom cache. Obviously, we would like to go with a packaged solution, but we need to ensure that there are no limits in relation to maximum size. Looking through the documentation, it appears that the only limit on a 64-bit system is the actual physical memory on the box. Sounds good, but we want to prove it, since we would like to see how the application scales when we store about 30GB (the limit on our UAT environment is 32GB). The ultimate goal is to see if we can store about 50-60GB in memory.
Is this correct? Or are there any caveats in relation to this?
We have been able to get our data store to hold 8GB of data, but we want to increase this. I am assuming that the following error message is due to us not changing /etc/system on the box:
836: Cannot create data store shared-memory segment, error 22
703: Subdaemon connect to data store failed with error TT836
Can somebody from the user community, or an Oracle TimesTen support person, recommend what should be changed above to fully utilise the 32GB of memory and the 12 processors on the box?
It's quite a big deal for us to bounce the UAT Unix box, so I want to be sure that I have factored in all the changes that would ensure the following:
* Existing Oracle Database instances are not adversely impacted
* We are able to create a data store which is able to fully utilise the physical memory on the box
* We don't need to change these settings for quite some time, and still be able to complete our evaluation
We are currently in discussion with our in-house Oracle team, but we need to complete this process before contacting Oracle directly; help with the above request would speed this process up.
The current /etc/system settings are below, and I have put the current machine's settings in as comments at the end of each line.
Can you please provide the recommended settings to fully utilise the existing 32GB on the box?
Machine
## I have listed the minimum prerequisites for TimesTen and contrasted them with the machine's current settings:
SunOS uatmachinename 5.9 Generic_118558-11 sun4us sparc FJSV,GPUZC-M
FJSV,SPARC64-V
System Configuration: Sun Microsystems sun4us
Memory size: 32768 Megabytes
12 processors
/etc/system
set rlim_fd_max = 1080 # Not set on the machine
set rlim_fd_cur=4096 # Not set on the machine
set rlim_fd_max=4096 # Not set on the machine
set semsys:seminfo_semmni = 20 # machine has 0x42, Decimal = 66
set semsys:seminfo_semmsl = 512 # machine has 0x81, Decimal = 129
set semsys:seminfo_semmns = 10240 # machine has 0x2101, Decimal = 8449
set semsys:seminfo_semmnu = 10240 # machine has 0x2101, Decimal = 8449
set shmsys:shminfo_shmseg=12 # machine has 1024
set shmsys:shminfo_shmmax = 0x20000000 # machine has 8,589,934,590 (~8GB); the hexadecimal 0x20000000 translates into 536,870,912 (512MB)
$ /usr/sbin/sysdef | grep -i sem
sys/sparcv9/semsys
sys/semsys
* IPC Semaphores
66 semaphore identifiers (SEMMNI)
8449 semaphores in system (SEMMNS)
8449 undo structures in system (SEMMNU)
129 max semaphores per id (SEMMSL)
100 max operations per semop call (SEMOPM)
1024 max undo entries per process (SEMUME)
32767 semaphore maximum value (SEMVMX)
16384 adjust on exit max value (SEMAEM)
Hi,
I work for Oracle in the UK and I manage the TimesTen pre-sales support team for EMEA.
Your main problem here is that the value of shmsys:shminfo_shmmax in /etc/system is currently set to 8 GB, thereby limiting the maximum size of a single shared memory segment (and hence a TimesTen datastore) to 8 GB. You need to increase this to a suitable value (maybe 32 GB in your case). While you are doing that, it would be advisable to increase any of the other kernel parameters that are currently lower than recommended up to the recommended values. There is no harm in increasing them, other than possibly a tiny increase in kernel resources, but with 32 GB of RAM I don't think you need be concerned about that...
You should also make sure that the system has enough swap space configured to support a shared memory segment of this size. I would recommend that you have at least 48 GB of swap configured.
TimesTen should detect that you have a multi-CPU machine and adjust its behaviour accordingly but if you want to be absolutely sure you can set SMPOptLevel=1 in the ODBC settings for the datastore.
If you want more direct assistance with your evaluation going forward then please let me know and I will contact you directly. Of course, you are free to continue using this forum if you would prefer.
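As a sketch of that change (the value is an assumption for a 32 GB maximum segment; check the recommended settings for your Solaris release), the /etc/system entry would be:

```text
* /etc/system -- hypothetical value for a 32 GB maximum shared segment
set shmsys:shminfo_shmmax = 0x800000000
* 0x800000000 = 34,359,738,368 bytes = 32 GB
```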
Regards, Chris -
Redhat: TT0837: Cannot attach data store shared-memory segment, error 12
Customer has two systems, one Solaris and one Linux. We have six DSNs, with one DSN PermSize at 1.85G. Both OS systems are 32-bit. After migrating from TT 6.0 to 11.2, I cannot get replication working on the Linux system for the 1.85G DSN. The Solaris system is working correctly. I've been able to duplicate the issue in our lab also. If I drop the PermSize down to 1.0G, replication starts. I've tried changing multiple parameters, including setting up HugePages.
What else could I be missing? Decreasing the PermSize is not an option for this customer. Going to a full 64-bit system is on our development roadmap but is at least a year away due to other commitments.
This is my current linux lab configuration.
ttStatus output for the failed Subscriber DSN and a working DynamicDB DSN. As you can see, the policy is set to "Always" but it has no Subdaemon or Replication processes running.
Data store /space/Database/db/Subscriber
There are no connections to the data store
Replication policy : Always
Replication agent is running.
Cache Agent policy : Manual
Data store /space/Database/db/DynamicDB
There are 14 connections to the data store
Shared Memory KEY 0x5602000c ID 1826586625 (LARGE PAGES, LOCKED)
Type PID Context Connection Name ConnID
Replication 88135 0x56700698 LOGFORCE 4
Replication 88135 0x56800468 REPHOLD 3
Replication 88135 0x56900468 TRANSMITTER 5
Replication 88135 0x56a00468 REPLISTENER 2
Subdaemon 86329 0x08472788 Manager 2032
Subdaemon 86329 0x084c5290 Rollback 2033
Subdaemon 86329 0xd1900468 Deadlock Detector 2037
Subdaemon 86329 0xd1a00468 Flusher 2036
Subdaemon 86329 0xd1b00468 HistGC 2039
Subdaemon 86329 0xd1c00468 Log Marker 2038
Subdaemon 86329 0xd1d00468 AsyncMV 2041
Subdaemon 86329 0xd1e00468 Monitor 2034
Subdaemon 86329 0xd2000468 Aging 2040
Subdaemon 86329 0xd2200468 Checkpoint 2035
Replication policy : Always
Replication agent is running.
Cache Agent policy : Manual
Summary of Perm and Temp Sizes of each system.
PermSize=100
TempSize=50
PermSize=100
TempSize=50
PermSize=64
TempSize=32
PermSize=1850 => Subscriber
TempSize=35 => Subscriber
PermSize=64
TempSize=32
PermSize=200
TempSize=75
[SubscriberDir]
Driver=/opt/SANTone/msc/active/TimesTen/lib/libtten.so
DataStore=/Database/db/Subscriber
AutoCreate=0
DurableCommits=0
ExclAccess=0
LockLevel=0
PermWarnThreshold=80
TempWarnThreshold=80
PermSize=1850
TempSize=35
ThreadSafe=1
WaitForConnect=1
Preallocate=1
MemoryLock=3
###MemoryLock=0
SMPOptLevel=1
Connections=64
CkptFrequency=300
DatabaseCharacterSet=TIMESTEN8
TypeMode=1
DuplicateBindMode=1
msclab3201% cat ttendaemon.options
-supportlog /var/ttLog/ttsupport.log
-maxsupportlogsize 500000000
-userlog /var/ttLog/userlog
-maxuserlogsize 100000000
-insecure-backwards-compat
-verbose
-minsubs 12
-maxsubs 60
-server 16002
-enableIPv6
-linuxLargePageAlignment 2
msclab3201# cat /proc/meminfo
MemTotal: 66002344 kB
MemFree: 40254188 kB
Buffers: 474104 kB
Cached: 19753148 kB
SwapCached: 0 kB
HugePages_Total: 2000
HugePages_Free: 2000
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
## Before loading Subscriber Dsn
msclab3201# ipcs -m
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0xbc0101d6 1703411712 ttadmin 660 1048576 1
0x79010649 24444930 root 666 404 0
## After loading Subscriber Dsn
msclab3201# ipcs -m
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0xbc0101d6 1703411712 ttadmin 660 1048576 2
0x7f020012 1825964033 ttadmin 660 236978176 2
0x79010649 24444930 root 666 404 0
msclab3201#
msclab3201# sysctl -a | grep huge
vm.nr_hugepages = 2000
vm.nr_hugepages_mempolicy = 2000
The size of these databases is very close to the limit for 32-bit systems and you are almost certainly running into address space issues given that 11.2 has a slightly larger footprint than 6.0. 32-bit is really 'legacy' nowadays and you should move to a 64-bit platform as soon as you are able. That will solve your problems. I do not think there is any other solution (other than reducing the size of the database).
Chris -
Slow data transfer between servers across ACE
Hello all,
We have been facing slowness in SFTP file transfer. All three tiers are in different Vlan on ACE module.
App Server1 to DB Servers (Data Transfer is slow 32KB)
App Server2 to DB Server (Data Transfer is OK)
DB Servers to Web Servers (Data Transfer is OK)
TestPC to App Server1 (Data Transfer is slow 32KB)
TestPC to App Server2 (Data Transfer is OK)
I have requested the customer on results of some other test cases, but before i start on ace, just checking if someone has faced similar problem.
Regards,
Akhtar
See this thread.
http://community.bt.com/t5/Other-BB-Queries/HomeHub-3-LAN-speeds-only-10Mb/m-p/238589/highlight/true...
-
838 Cannot get data store shared memory segment error in Timesten
Hi Chris,
This is Shalini. I have mailed you for the last two days regarding this issue. You asked me for a few details. Here are the answers:
1. Have you modified anything after the TimesTen installation? - No, I didn't change anything.
2. What are the three values under physical memory? - Total - 2036 MB, Cached - 1680 MB, Free - 12 MB
3. ttmesg.log and tterrors.log? -
tterrors.log::
12:48:58.83 Warn: : 1016: 3972 Connecting process reports creation failure
ttmesg.log::
12:48:58.77 Info: : 1016: maind got #12.14, hello: pid=3972 type=library payload=%00%00%00%00 protocolID=TimesTen 11.2.1.3.0.tt1121_32 ident=%00%00%00%00
12:48:58.77 Info: : 1016: maind: done with request #12.14
12:48:58.77 Info: : 1016: maind got #12.15 from 3972, connect: name=c:\timesten\tt1121_32\data\my_ttdb\my_ttdb context= 266e900 user=InstallAdmin pass= dbdev= logdev= logdir=c:\timesten\tt1121_32\logs\my_ttdb grpname= access=%00%00%00%00 persist=%00%00%00%00 flags=@%00%00%01 newpermsz=%00%00%00%02 newtempsz=%00%00%00%02 newpermthresh=Z%00%00%00 newtempthresh=Z%00%00%00 newlogbufsz=%00%00%00%02 newsgasize=%00%00%00%02 newsgaaddr=%00%00%00%00 autorefreshType=%ff%ff%ff%ff logbufparallelism=%00%00%00%00 logflushmethod=%00%00%00%00 logmarkerinterval=%00%00%00%00 connections=%03%00%00%00 control1=%00%00%00%00 control2=%00%00%00%00 control3=%00%00%00%00 ckptrate=%06%00%00%00 connflags=%00%00%00%00 newlogfilesz=%00%00@%01 skiprestorecheck=%00%00%00%00 realuser=InstallAdmin conn_name=my_ttdb ckptfrequency=X%02%00%00 ckptlogvolume=%14%00%00%00 recoverythreads=%03%00%00%00 reqid=* plsql=%ff%ff%ff%ff receiverThreads=%00%00%00%00
12:48:58.77 Info: : 1016: 3972 0266E900: Connect c:\timesten\tt1121_32\data\my_ttdb\my_ttdb a=0x0 f=0x1000040
12:48:58.77 Info: : 1016: permsize=33554432 tempsize=33554432
12:48:58.77 Info: : 1016: logbuffersize=33554432 logfilesize=20971520
12:48:58.77 Info: : 1016: permwarnthresh=90 tempwarnthresh=90 logflushmethod=0 connections=3
12:48:58.77 Info: : 1016: ckptfrequency=600 ckptlogvolume=20 conn_name=my_ttdb
12:48:58.77 Info: : 1016: recoverythreads=3 logbufparallelism=0
12:48:58.77 Info: : 1016: plsql=0 sgasize=33554432 sgaaddr=0x00000000
12:48:58.77 Info: : 1016: control1=0 control2=0 control3=0
12:48:58.79 Info: : 1016: ckptrate=6 receiverThreads=0
12:48:58.79 Info: : 1016: 3972 0266E900: No such data store
12:48:58.79 Info: : 1016: daDbConnect failed
12:48:58.79 Info: : 1016: return 1 833 'no such data store!' arg1='c:\timesten\tt1121_32\data\my_ttdb\my_ttdb' arg1type='S' arg2='' arg2type='S'
12:48:58.79 Info: : 1016: maind: done with request #12.15
12:48:58.79 Info: : 1016: maind got #12.16 from 3972, create: name=c:\timesten\tt1121_32\data\my_ttdb\my_ttdb context= 266e900 user=InstallAdmin pass= dbdev= logdev= logdir=c:\timesten\tt1121_32\logs\my_ttdb grpname= persist=%00%00%00%00 access=%00%00%00%00 flags=@%00%00%01 permsize=%00%00%00%02 tempsize=%00%00%00%02 permthresh=Z%00%00%00 tempthresh=Z%00%00%00 logbufsize=%00 %00%02 logfilesize=%00%00@%01 shmsize=8%f4%c9%06 sgasize=%00%00%00%02 sgaaddr=%00%00%00%00 autorefreshType=%01%00%00%00 logbufparallelism=%00%00%00%00 logflushmethod=%00%00%00%00 logmarkerinterval=%00%00%00%00 connections=%03%00%00%00 control1=%00%00%00%00 control2=%00%00%00%00 control3=%00%00%00%00 ckptrate=%06%00%00%00 connflags=%00%00%00%00 inrestore=%00%00%00%00 realuser=InstallAdmin conn_name=my_ttdb ckptfrequency=X%02%00%00 ckptlogvolume=%14%00%00%00 recoverythreads=%03%00%00%00 reqid=* plsql=%00%00%00%00 receiverThreads=%00%00%00%00
12:48:58.79 Info: : 1016: 3972 0266E900: Create c:\timesten\tt1121_32\data\my_ttdb\my_ttdb p=0x0 a=0x0 f=0x1000040
12:48:58.79 Info: : 1016: permsize=33554432 tempsize=33554432
12:48:58.79 Info: : 1016: logbuffersize=33562624 logfilesize=20971520
12:48:58.80 Info: : 1016: shmsize=113898552
12:48:58.80 Info: : 1016: plsql=0 sgasize=33554432 sgaaddr=0x00000000
12:48:58.80 Info: : 1016: permwarnthresh=90 tempwarnthresh=90 logflushmethod=0 connections=3
12:48:58.80 Info: : 1016: ckptfrequency=600 ckptlogvolume=20 conn_name=my_ttdb
12:48:58.80 Info: : 1016: recoverythreads=3 logbufparallelism=0
12:48:58.80 Info: : 1016: control1=0 control2=0 control3=0
12:48:58.80 Info: : 1016: ckptrate=6, receiverThreads=1
12:48:58.80 Info: : 1016: creating DBI structure, marking in flux for create by 3972
12:48:58.82 Info: : 1016: daDbCreate: about to call createShmAndSem, trashed=-1, panicked=-1, shmSeq=1, name c:\timesten\tt1121_32\data\my_ttdb\my_ttdb
12:48:58.82 Info: : 1016: marking in flux for create by 3972
12:48:58.82 Info: : 1016: create.c:338: Mark in-flux (now reason 1=create pid 3972 nwaiters 0 ds c:\timesten\tt1121_32\data\my_ttdb\my_ttdb) (was reason 1)
12:48:58.82 Info: : 1016: maind: done with request #12.16
12:48:58.83 Info: : 1016: maind got #12.17 from 3972, create complete: name=c:\timesten\tt1121_32\data\my_ttdb\my_ttdb context= 266e900 connid=%00%08%00%00 success=N
12:48:58.83 Info: : 1016: 3972 0266E900: CreateComplete N c:\timesten\tt1121_32\data\my_ttdb\my_ttdb
12:48:58.83 Warn: : 1016: 3972 Connecting process reports creation failure
12:48:58.83 Info: : 1016: About to destroy SHM 560
12:48:58.83 Info: : 1016: maind: done with request #12.17
12:48:58.83 Info: : 1016: maind 12: socket closed, calling recovery (last cmd was 17)
12:48:58.85 Info: : 1016: Starting daRecovery for 3972
12:48:58.85 Info: : 1016: 3972 ------------------: process exited
12:48:58.85 Info: : 1016: Finished daRecovery for pid 3972.
I think "Connecting process reports creation failure" is the error shown.
4. DSN attributes? -
Earlier I used my_ttdb, which is the system DSN, and I also tried creating a user DSN, but it is still not working. I will give the my_ttdb DSN attributes.
I am unable to attach the screenshots to this message. Is there any way to attach them? I am not allowed to reply by mail from my company, so I have only sent this message. You can reply to my mailbox ID.
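For what it's worth, a rough sanity check (my own reading, not something stated in the thread): the create request in ttmesg.log asks for a shared-memory segment of roughly 109 MB (shmsize=113898552, which lines up with PermSize 32 + TempSize 32 + LogBufMB 32 plus overhead), while Task Manager reports only 12 MB of physical memory free. That shortfall would be consistent with the "Connecting process reports creation failure" warning.

```python
# Rough sanity check using only the numbers already reported in the
# thread; this is an interpretation, not a confirmed diagnosis.

MB = 1024 * 1024

shmsize_bytes = 113898552   # "shmsize=113898552" from the create request in ttmesg.log
free_physical = 12 * MB     # "Free - 12 MB" reported under physical memory

print(f"segment needed : {shmsize_bytes / MB:.1f} MB")   # about 108.6 MB
print(f"free physical  : {free_physical / MB:.1f} MB")
print("segment fits   :", shmsize_bytes <= free_physical)
```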
Chris, or anybody, please reply to the message below.
I am having a table called Employee. I have created a view called emp_view.
create view emp_view as
select /*+ INDEX (Employee emp_no) */
emp_no,emp_name
from Employee;
We have another index on the Employee table called emp_name.
I need to use the emp_name index through emp_view, like below:
select /*+ INDEX (<from employee tables> emp_name) */
emp_no
from emp_view
where emp_name='SSS';
In the hint I tried /*+ INDEX (emp_view.Employee emp_name) */, but it is still using the index named in the emp_view view, i.e. emp_no. Can anyone please help me resolve this issue?
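For what it's worth, Oracle's "global" hint syntax lets the outer query reference a table inside a view by prefixing the view alias. A sketch, assuming emp_name is the name of the index (the aliases v and e are mine, not from the thread):

```sql
-- Aliases v and e are introduced here for illustration;
-- emp_name is assumed to be an index name.
create or replace view emp_view as
select e.emp_no, e.emp_name
from Employee e;

-- Global hint: view_alias.table_alias tells Oracle which index to use
-- for the table referenced inside the view.
select /*+ INDEX(v.e emp_name) */ emp_no
from emp_view v
where emp_name = 'SSS';
```

An alternative worth considering is leaving the view unhinted entirely and hinting each query instead, since a hint baked into the view applies to every query that uses it.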
Edited by: user12154813 on Nov 3, 2009 4:21 AM
The DSN attributes for the two paths you gave are listed below.
HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\my_ttdb:
(Default) - (value not set)
AssertDlg - 1
AutoCreate -1
AutorefreshType -1
CacheGridEnable -0
CacheGridMsgWait -60
CatchupOverride -0
CkptFrequency -600
CkptLogVolume -20
CkptRate -6
ConnectionCharacterSet -US7ASCII
Connections -3
Control1 -0
Control2 -0
Control3 -0
DatabaseCharacterSet -US7ASCII
DataStore - C:\TimesTen\tt1121_32\Data\my_ttdb\my_ttdb
DDLCommitBehavior -0
Description - My Timesten Data store
Diagnostics -1
Driver - C:\TimesTen\tt1121_32\bin\ttdv1121.dll
DuplicateBindMode -0
DurableCommits -0
DynamicLoadEnable -1
DynamicLoadErrorMode -0
ExclAccess -0
ForceConnect -0
InRestore -0
Internal1 -0
Isolation -1
LockLevel -0
LockWait -10.0
LogAutoTruncate -1
LogBuffSize -0
LogBufMB -32
LogBufParallelism -0
LogDir -C:\TimesTen\tt1121_32\Logs\my_ttdb
LogFileSize -20
LogFlushMethod -0
Logging -1
LogMarkerInterval -0
LogPurge -1
MatchLogOpts -0
MaxConnsPerServer -4
NLS_LENGTH_SEMANTICS -BYTE
NLS_NCHAR_CONV_EXCP -0
NLS_SORT -BINARY
NoConnect -0
Overwrite -0
PassThrough -0
PermSize -32
PermWarnThreshold -0
PLSQL - value (-1)
PLSQL_CODE_TYPE -INTERPRETED
PLSQL_CONN_MEM_LIMIT - 100
PLSQL_MEMORY_SIZE - 32
PLSQL_OPTIMIZE_LEVEL - 2
PLSQL_TIMEOUT - 30
Preallocate -0
PrivateCommands -0
QueryThreshold - value (-1)
RACCallback -1
ReadOnly -0
ReceiverThreads -0
RecoveryThreads -3
SkipRestoreCheck -0
SMPOptLevel - value(-1)
SQLQueryTimeout - value(-1)
Temporary -0
TempSize -32
TempWarnThreshold - 0
ThreadSafe -1
TransparentLoad- value(-1)
TypeMode -0
WaitForConnect -1
XAConnection -0
HKEY_CURRENT_USER\SOFTWARE\ODBC\ODBC.INI\my_test:
Existing values that differ from the DSN attributes above:
TypeMode - value(-1)
RecoveryThreads -0
MaxConnsPerServer -5
LogFileSize -4
LogDir -C:\TimesTen\tt1121_32\Logs\my_test
DataStore - C:\TimesTen\tt1121_32\data\my_test\my_test
DatabaseCharacterSet -AL32UTF8
ConnectionCharacterSet -AL32UTF8
CkptFrequency - value(-1)
CkptLogVolume - value(-1)
CkptRate -value(-1)
Connections -0
CacheGridEnable -1
newly added values:
ServerStackSize - 10
ServersPerDSN -2
The other attributes are the same for both DSNs.
Please reply as soon as possible.
Thanks,
Shalini.
Edited by: user12154813 on Nov 3, 2009 11:34 PM
Edited by: user12154813 on Nov 3, 2009 11:37 PM -
Working with multiple users and computers, but shared data
Sorry if this is posted in a poor place; I'm not sure where the best place is. This is sort of a general question.
For a long time, my wife and I have had either one computer, or two machines where one has definitely been just a terminal. We've basically set up all of our data to live on one primary machine, and if we want to view/edit that data we have to use that machine.
We just got a new MacBook Pro and I would like to be able to use two machines as equals. Sadly, this idea of multiple computers, with two users and some shared data is really giving me difficulty. I was wondering if anyone has any suggestions on how to best manage things like:
Synchronizing portions of our contact list (We share about 50% of the combined library -- we don't have to share all though).
How to manage iPhoto so that we can each have access to the photos. As an added difficulty (or maybe this is easier?) my wife just wants access to the pictures for viewing and sharing on Facebook/Picasa/etc. I am the only one who wants to edit, correct and cull our library. That said, I always edit when I first put the data on the machine, and almost never again; so it would be fine to have one (or both) accounts set up as view-only for the iPhoto data.
How to manage iTunes so that we can each have access to the music. As a super awesome bonus, it would be great if we could have three libraries: His, Hers and Shared. Maybe as much as 30% of our music library is similar, the rest just gets in the way.
What is the best solution people have found for calendars? (I'm thinking two separate calendars, where we each subscribe to the other's iCal feed.)
Mail.app and bookmark synching is not really a problem for us.
A few extra points:
* One machine is portable, and the other isn't. Ideally, when the laptop is out of the house, both machines should still have reasonable access to the shared data. That is: Just dumping things in the shared folder won't work because when the laptop is out of the house we will be disconnected from the source data.
* We just got a second iPhone. This means that both of us will be taking photos/video separately and trying to synch back to the master data store.
* Basically, I'm trying to minimize data duplication as much as possible, and just synchronize the systems to each other.
Thanks a ton in advance. If anyone has any suggestions at all, I would love to hear them. Including "This is in the wrong forum, go ask here instead..."
So you have a desktop Mac and a laptop Mac, right? Two user accounts (and a third admin account) on each computer, right?
I profess that I haven't tried this, but here is how I would approach your problem:
Sharing Music and Photos between multiple user accounts on the same computer:
See if http://forums.macrumors.com/showthread.php?t=194992 and http://forums.macrumors.com/showthread.php?t=510993 provide any useful information to assist you in this endeavor.
Sharing across multiple computers:
Turn on file sharing on the desktop Mac (System Preferences > Sharing). You can then mount the desktop Mac as an external drive on the laptop's desktop. Copy the music and photo folders across; it will take a while the first time. Then, for future use, get a copy of the donationware Carbon Copy Cloner or equivalent. You can use CCC to selectively sync specific folders from one computer to the other. There may be a hassle with digital copyright issues on music and movies, though.
Calendars:
As you have suggested yourself, publishing yours and subscribing to hers is probably the best way to do it on the same computer. Across computers, syncing with CCC or equivalent would probably be the way to go. -
Mixing BE6000 UCS Server and "normal" UCS server in the same deployment
Hello,
I have been handed a project which has one high density BE6000 UCS server and a separate UCS C220 M3 server. The latter server was included to host a MediaSense call recording system but this will only use 2 of the available 8 vCPUs on the UCS C220 M3 server.
The total number of users is 200.
I want to implement a resilient system and so would like to deploy two servers in a cluster for each of the following applications:
CUCM 10.5
Unity Connection 10.5
IM & Presence 10.5
As well as these applications there will be UCCX 10.6 and Cisco Unified Attendant Console Advanced (10.5) but these will be deployed as single servers.
Looking at the UCS servers, they have the capacity for me to split the CUCM/CUC/IM&P clusters between them.
I cannot see any technical reason why this will not work, but I do not want to be caught out by any Cisco support policies.
If I were to implement the system in this way, would there be any issues with the deployment or with getting support from TAC?
The separate UCS C220 M3 server has 8 x 8GB RAM (64GB total) and 8 x 300GB HDD, plus a quad-port Ethernet card.
James, first of all, I am not sure I understand your query in detail. Do you mean you have two UCS C220 M3 servers, and one of them is currently running BE6000?
Having said that, the key here is to carefully plan your deployment against the capacity of the servers you have.
E.g. deploying 200 users, using the following OVAs:
UCS C220 M3 server 1 (using the default TRC, i.e. 8 vCPU with 8GB per vCPU):
Publisher (2,500-user CUCM OVA): 1 vCPU (6GB RAM), 80GB HD
IM&P publisher (1,000-user OVA): 1 vCPU (2GB RAM), 80GB HD
CUC publisher (1,000-user OVA): 2 vCPU (6GB RAM), 160GB HD (NB: 1 vCPU is reserved for ESXi)
UCCX master (300-agent OVA): 2 vCPU (8GB RAM), 292GB HDD
With this placement you have a total of 22GB RAM and 612GB HDD used up.
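The per-VM figures above can be totalled up as a quick sanity check (the VM labels are mine, shortened from the list):

```python
# Summing the per-VM sizing figures listed for UCS C220 M3 server 1.
vms = {
    "CUCM publisher (2,500-user OVA)": dict(vcpu=1, ram_gb=6, disk_gb=80),
    "IM&P publisher (1,000-user OVA)": dict(vcpu=1, ram_gb=2, disk_gb=80),
    "CUC publisher (1,000-user OVA)":  dict(vcpu=2, ram_gb=6, disk_gb=160),
    "UCCX master (300-agent OVA)":     dict(vcpu=2, ram_gb=8, disk_gb=292),
}

total_vcpu = sum(v["vcpu"] for v in vms.values())
total_ram  = sum(v["ram_gb"] for v in vms.values())
total_disk = sum(v["disk_gb"] for v in vms.values())

print(f"{total_vcpu} vCPU, {total_ram}GB RAM, {total_disk}GB HDD")
# -> 6 vCPU, 22GB RAM, 612GB HDD
```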
A breakdown is shown below.
Server Name: C220 M3S TRC#1 (Medium)

Short Name | Application                                           | Release | VM Name               | vCPU | vRAM (GB) | vDisk (GB)
-----------|-------------------------------------------------------|---------|-----------------------|------|-----------|-----------
CUCM       | Unified Communications Manager                        | 10.x    | CallCtrl: 2,500 users | 1    | 4         | 80
IM&P       | IM & Presence                                         | 10.x    | 1,000 users           | 1    | 2         | 80
CUC        | Unity Connection                                      | 10.0    | 1,000 users           | 1    | 4         | 160
ESXi       | ESXi reservation for Unity Connection                 | -       | -                     | 1*   | -         | -
CUCCX      | Cisco Unified Contact Center Express / Unified IP IVR | 10.x    | Main: 300 agents      | 2    | 8         | 292
ESXi       | VMware vSphere ESXi                                   | 5.5     | -                     | -    | 4**       | -
* Note: This is 1 physical CPU core per host, regardless of the number of Cisco Unity Connection (CUC) VMs.
** Note: This is 4GB of physical RAM per host. -
Multiple Shared Content Stores in App-V 5
Just wanted to see if it's possible to have a Shared Content Store (SCS) per site when we have an RDS Solution across two connected data centres?
We are planning on implementing the App-V 5.0 client for RDS on the Session Hosts (2012 R2) at both sites and would like the Session Hosts to contact their local SCS, rather than going across the WAN link between sites.
We'll be using System Centre 2012 to provision the content, rather than a dedicated App-V Infrastructure.
Cheers for now
Russell
Hello,
The App-V client will use the source, defined either by the location from which you added the package or by PackageSourceRoot, as the location for the SCS.
I can't say that PackageSourceRoot is supported when used in collaboration with SCCM, but you could test it.
See this reference for PackageSourceRoot;
http://technet.microsoft.com/en-us/library/jj687745.aspx
Nicke Källén | The Knack| Twitter:
@Znackattack -
Request Scope for portlets? Sharing data tutorial does not work for portlet
I created 2 identical applications, one that was a regular JSF web app, and one a JSR168 portlet. By following the "Sharing Data Between Two Pages" tutorial, I can get this technique to work without a problem in the web app, but the portlet (deployed locally using the pluto server) does not work. What is the difference between a regular app and a portlet app?
This shows that portlets apparently handle scope much differently than other applications, so any information relating to this would be helpful to pass along.
Portlet Life Cycle Differences
A portlet page is the same as a web application page in the Creator 2 application
model with differences only in the page life cycle. To best understand the life
cycle differences, you must first understand how a portal interacts with a portlet.
This interaction between portals and portlets is defined by the JSR-168 Portlet
specification. Typically a portal display in the web browser shows multiple
portlets. That is, when the portal page is displayed, there are actually multiple
portlets displaying or rendering their content. The job of the portal is to manage
how these portlets are displayed. Internally, the portal is responsible for telling
each portlet two things:
1. When to display itself
2. If there has been an action, such as a button press, performed inside that
portlet.
When a portal wants portlets to display their contents, the portal sends a render
request to each portlet showing in the portal. If one of the portlets showing in the
portal has a button tied to an action request, when that button is pushed, the
portal fires an action request for that portlet. In addition, the portal generates a
render request for the portlet and all other portlets on the page. You should note
that the portal implementation fires both action and render requests for each
action method on a portlet showing in the portal. The implication of this
interaction means the portlet page being rendered can not make assumptions
about the state of the values to be shown. Unlike a normal web application page,
a portlet page can't assume that the page being rendered in the Render Response
phase is the same page that was built in Restore View. If a portlet wants to
maintain state across repeated render requests, the portlet must use the session
bean to store stateful information.
The main point to remember is that a portlet page must always be prepared to render
its values from scratch or from session data. This implies you should never bind
portlet page UI components to page bean properties or request bean properties.
Also, you should never rely on page bean instance variables that might be set
during an action event.
Hope this helps. -
Can I create a stored procedure that accesses data from tables on other servers?
I'm developing a procedure, and within it I'm trying to access another server and select from a table that belongs to that other server. When I compile the procedure I get this error message: "PLS-00904: insufficient privilege to access object BC.CADPAP", where BC.CADPAP is the problematic table.
How can I use more than one connection within an Oracle stored procedure?
How can I access tables on another server from a stored procedure when I'm already connected to a different server?
Can I create a stored procedure that accesses data from tables on other servers?
You need a database link between the two servers. Then you can execute that statement without any problem. Try creating one with the
CREATE DATABASE LINK command. Refer to the documentation for further details.
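A minimal sketch of the database-link approach; the link name, credentials and TNS alias below are placeholders, not values from the thread:

```sql
-- Placeholder names: bc_link, bc_password and remote_db_tns_alias
-- must be replaced with real values for your environment.
CREATE DATABASE LINK bc_link
  CONNECT TO bc IDENTIFIED BY bc_password
  USING 'remote_db_tns_alias';

-- Inside a procedure, the remote table is addressed with @link_name.
CREATE OR REPLACE PROCEDURE check_cadpap AS
  v_rows NUMBER;
BEGIN
  SELECT COUNT(*) INTO v_rows FROM bc.cadpap@bc_link;
  DBMS_OUTPUT.PUT_LINE('CADPAP rows: ' || v_rows);
END;
/
```

Note that the remote account named in CONNECT TO must itself have SELECT privilege on BC.CADPAP, otherwise the same PLS-00904/insufficient-privilege error will simply reappear on the remote side.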