Temp tables and DML commit
I need to create a temporary table that is transaction-scoped (not session-scoped), insert data into it, and read the data back. Apparently, each xsql:dml implies a commit. I cannot use ON COMMIT PRESERVE ROWS (session-scoped) in my table definition, and each commit (implied via xsql:dml) deletes the rows that I need to read. Help?
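For reference, these are the two GTT variants in question (the table names here are made up):

```sql
-- Transaction-scoped: rows vanish at every commit, which is the
-- problem here, since each xsql:dml implies a commit.
CREATE GLOBAL TEMPORARY TABLE my_gtt_txn (id NUMBER)
ON COMMIT DELETE ROWS;

-- Session-scoped: rows survive commits for the whole session,
-- which the poster cannot use.
CREATE GLOBAL TEMPORARY TABLE my_gtt_session (id NUMBER)
ON COMMIT PRESERVE ROWS;
```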
Every DML operation is logged in the LOG file. Is it possible to insert the data in small chunks?
http://www.dfarber.com/computer-consulting-blog/2011/1/14/processing-hundreds-of-millions-records-got-much-easier.aspx
Best Regards, Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Blog: Large scale of database and data cleansing
Remote DBA Services: Improving MS SQL Database Performance
Similar Messages
-
Use global temp table for DML error logging
Our database is 11.2.0.4 Enterprise Edition on Solaris 10.
We are wondering if anyone has an opinion on, or has done this before: using a global temp table for DML error logging. We have a fairly busy transactional database with 2 hot tables for inserts. The regular error table created with dbms_errlog has caused many deadlocks which we don't quite understand yet. We thought of using a global temp table for the purpose, and that seemed to work, but we can't read errors from the GTT; the table is empty even when reading from the same session as the inserts. Does anyone have an idea why?
Thanks
The insert into the error logging table is done with a recursive transaction, so it is separate from the session that is doing the actual insert; that is why your GTT appears empty.
Adapted from http://oracle-base.com/articles/10g/dml-error-logging-10gr2.php
INSERT INTO dest
SELECT *
FROM source
LOG ERRORS INTO err$_dest ('INSERT') REJECT LIMIT UNLIMITED;
99,998 rows inserted.
select count(*) from dest;
COUNT(*)
99998
SELECT *
FROM err$_dest
WHERE ora_err_tag$ = 'INSERT';
1400 "ORA-01400: cannot insert NULL into ("E668983_DBA"."DEST"."CODE")" I INSERT 1000 Description for 1000
1400 "ORA-01400: cannot insert NULL into ("E668983_DBA"."DEST"."CODE")" I INSERT 10000 Description for 10000
1400 "ORA-01400: cannot insert NULL into ("E668983_DBA"."DEST"."CODE")" I INSERT 1000 Description for 1000
1400 "ORA-01400: cannot insert NULL into ("E668983_DBA"."DEST"."CODE")" I INSERT 10000 Description for 10000
rollback;
select count(*) from dest;
COUNT(*)
0
SELECT *
FROM err$_dest
WHERE ora_err_tag$ = 'INSERT';
1400 "ORA-01400: cannot insert NULL into ("E668983_DBA"."DEST"."CODE")" I INSERT 1000 Description for 1000
1400 "ORA-01400: cannot insert NULL into ("E668983_DBA"."DEST"."CODE")" I INSERT 10000 Description for 10000
1400 "ORA-01400: cannot insert NULL into ("E668983_DBA"."DEST"."CODE")" I INSERT 1000 Description for 1000
1400 "ORA-01400: cannot insert NULL into ("E668983_DBA"."DEST"."CODE")" I INSERT 10000 Description for 10000
-
Difference between temp table and table variable, and which is better performance-wise?
Hello,
Could anyone explain the difference between a temp table (#, ##) and a table variable (DECLARE @V TABLE (EMP_ID INT))?
Which one is recommended for better performance?
Also, is it possible to create CLUSTERED and NONCLUSTERED indexes on a table variable?
In my case, 1-2 days of transactional data is more than 3-4 million rows. I tried using both # and a table variable, and found the table variable faster.
Does a table variable use memory or disk space?
Thanks, Shiven :) If Answer is Helpful, Please Vote
Check the following link to see the differences between TempTable & TableVariable: http://sqlwithmanoj.com/2010/05/15/temporary-tables-vs-table-variables/
TempTables & TableVariables both use memory & tempDB in similar manner, check this blog post: http://sqlwithmanoj.com/2010/07/20/table-variables-are-not-stored-in-memory-but-in-tempdb/
Performance-wise, if you are dealing with millions of records then a TempTable is ideal, as you can create explicit indexes on top of it. But if there are fewer records, then TableVariables are well suited.
On table variables, explicit indexes are not allowed; if you define a PK column, a clustered index will be created automatically.
But it also depends upon the specific scenario you are dealing with; can you share it?
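As a minimal illustration of the indexing difference (table and column names are made up):

```sql
-- A temp table accepts explicit indexes after creation.
CREATE TABLE #Orders (EMP_ID INT, OrderDate DATETIME);
CREATE NONCLUSTERED INDEX IX_Orders_EmpID ON #Orders (EMP_ID);

-- A table variable only gets indexes through constraints;
-- the PRIMARY KEY below implicitly creates a clustered index.
DECLARE @Orders TABLE (EMP_ID INT PRIMARY KEY, OrderDate DATETIME);
```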
~manoj | email: http://scr.im/m22g
http://sqlwithmanoj.wordpress.com
MCCA 2011 | My FB Page
-
Query is taking too much time for inserting into a temp table and for spooling
Hi,
I am working on a query optimization project where I have found a query which takes a very long time to execute.
Temp table is defined as follows:
DECLARE @CastSummary TABLE (CastID INT, SalesOrderID INT, ProductionOrderID INT, Actual FLOAT,
ProductionOrderNo NVARCHAR(50), SalesOrderNo NVARCHAR(50), Customer NVARCHAR(MAX), Targets FLOAT)
SELECT
C.CastID,
SO.SalesOrderID,
PO.ProductionOrderID,
F.CalculatedWeight,
PO.ProductionOrderNo,
SO.SalesOrderNo,
SC.Name,
SO.OrderQty
FROM
CastCast C
JOIN Sales.Production PO ON PO.ProductionOrderID = C.ProductionOrderID
join Sales.ProductionDetail d on d.ProductionOrderID = PO.ProductionOrderID
LEFT JOIN Sales.SalesOrder SO ON d.SalesOrderID = SO.SalesOrderID
LEFT JOIN FinishedGoods.Equipment F ON F.CastID = C.CastID
JOIN Sales.Customer SC ON SC.CustomerID = SO.CustomerID
WHERE
(C.CreatedDate >= @StartDate AND C.CreatedDate < @EndDate)
It takes almost 33% of the cost for the Table Insert when I insert the data into a temp table, and then 67% for Spooling. I removed 2 LEFT JOINs from the above query, made them INNER JOINs, and tried again. Query execution became a bit faster, but it still needs improvement.
How can I improve it further? Would it be good enough if I create indexes on the temp table columns, or what if I use derived tables? Please suggest.
-Pep
How can I improve it further? Would it be good enough if I create indexes on the temp table columns, or what if I use derived tables?
I suggest you start with index tuning. Specifically, make sure the columns specified in the WHERE and JOIN clauses are properly indexed (ideally clustered or covering, and unique when possible). Changing outer joins to inner joins is appropriate if you don't need outer joins in the first place.
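As a sketch of that advice applied here (the index choices are assumptions, not a tested recommendation), the table variable could become a temp table so the join column can be indexed explicitly:

```sql
-- Temp table version of @CastSummary, with an explicit index
-- on the join column (index choice is an assumption).
CREATE TABLE #CastSummary (
    CastID INT, SalesOrderID INT, ProductionOrderID INT, Actual FLOAT,
    ProductionOrderNo NVARCHAR(50), SalesOrderNo NVARCHAR(50),
    Customer NVARCHAR(MAX), Targets FLOAT);
CREATE CLUSTERED INDEX IX_CastSummary_CastID ON #CastSummary (CastID);

-- On the source side, an index supporting the WHERE clause
-- would let the date-range filter seek instead of scan:
-- CREATE INDEX IX_CastCast_CreatedDate ON CastCast (CreatedDate);
```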
Dan Guzman, SQL Server MVP, http://www.dbdelta.com
-
Temp table, and gather table stats
One of my developers is generating a report from Oracle. He loads a subset of the data he needs into a temp table, then creates an index on the temp table, and then runs his report from the temp table (which is a lot smaller than the original table).
My question is: is it necessary to gather table statistics for the temp table, and the index on the temp table, before querying it?
It depends. Yesterday I had a very bad experience with stats: one of my tables had NUM_ROWS = 300 while COUNT(*) returned 7 million, and the database version was 9206 (a bad era with optimizer bugs), so queries started breaking with a lot of buffer busy and latch free waits. It took a while to figure out, but I deleted the stats and everything came under control. What I mean to say is that statistics can be both good and bad: once you start collecting them, you should keep an eye on them.
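For completeness, gathering stats on such a temp table might look like this (the schema and table names are placeholders):

```sql
-- Gather statistics on the temp table, cascading to its index,
-- so the optimizer sees realistic row counts.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,
    tabname => 'MY_TEMP_REPORT',   -- placeholder table name
    cascade => TRUE);              -- also gathers index stats
END;
/
```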
Thanks.
-
Insert data from an tabular to a temp table and fetching a columns.
Hi guys ,
I am working in APEX 3.2, in which on one page I have data from various tables displayed in tabular form. I then have to insert the tabular form data into a temp table, fetch the data from the temp table, and insert it into my main table. I think I have to use a cursor to fetch the data from the temp table and insert it into the main table, but I didn't find a good example of doing this. Can anyone help me sort it out?
Thanks With regards
Balaji
Hi,
Follow this scenario.
Your Query:
SELECT t1.col1, t1.col2, t2.col1, t2.col2, t3.col1
FROM table1 t1, table2 t2, table3 t3
(where some join conditions);
On insert button click, call this process:
DECLARE
temp1 VARCHAR2(100);
temp2 VARCHAR2(100);
temp3 VARCHAR2(100);
temp4 VARCHAR2(100);
temp5 VARCHAR2(100);
BEGIN
FOR i IN 1..apex_application.g_f01.COUNT
LOOP
temp1 := apex_application.g_f01(i);
temp2 := apex_application.g_f02(i);
temp3 := apex_application.g_f03(i);
temp4 := apex_application.g_f04(i);
temp5 := apex_application.g_f05(i);
INSERT INTO table1(col1, col2) VALUES(temp1, temp2);
INSERT INTO table2(col1, col2) VALUES(temp3, temp4);
INSERT INTO table3(col1) VALUES(temp5);
END LOOP;
END;
You don't even need temp tables and a cursor to insert into different tables.
Thanks,
Ramesh P.
*(If you know you got the correct answer or a helpful answer, please mark it accordingly.)*
-
Hi all,
Can someone tell me why, when I create a GTT and insert the data like the following, I get an "inserted 14 rows" message, but when I do a select statement from SQL Workshop, sometimes I get the data and sometimes I don't? My understanding is that this data is supposed to stay during my logon session and then get cleaned out when I exit the session.
I am developing a screen in APEX and will use this temp table for the user to do some editing work. Once the editing is done, I save the data into a static table. Can this be done? So far, my every attempt to update the temp table results in 0 rows updated, and the temp table reverts back to 0 rows. Can you help me?
CREATE GLOBAL TEMPORARY TABLE "EMP_SESSION"
( "EMPNO" NUMBER NOT NULL ENABLE,
"ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9),
"MGR" NUMBER,
"HIREDATE" DATE,
"SAL" NUMBER,
"COMM" NUMBER,
"DEPTNO" NUMBER
) ON COMMIT PRESERVE ROWS;
insert into emp_session( EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
select * from emp;
select * from emp_session;
-- sometimes I get 14 rows, sometimes 0 rows
Thanks.
Tai
Tai,
To say that Apex doesn't support GTT's is not quite correct. In order to understand why it is not working for you and how they may be of use in an Apex application, you have to understand the concept of a session in Apex as opposed to a conventional database session.
In a conventional database session, as when you are connected with SQL*Plus, you have what is known as a dedicated session, or a synchronous connection. Temporary objects such as GTTs and packaged variables can persist across calls to the database. A session in Apex, however, is asynchronous by nature, and a connection to the database is made through some sort of server, such as the Oracle HTTP Server or the Apex Listener, which in effect maintains a pool of connections to the database; calls by your application aren't guaranteed to get the same connection for each call.
To get over this, the guys who developed Apex came up with various methods to maintain session state and global objects that are persistent within the context of an Apex session. One of these is Apex collections, which are a device for maintaining collection like (array like) data that is persistent within an Apex session. These are Apex session specific objects in that they are local to the session that creates and maintains them.
With this knowledge, you can then see why the GTT is not working for you and also how a GTT may be of use in an Apex application, provided you don't expect the data to persist across a call, as in a PL/SQL procedure. You should note though, that unless you are dealing with very large datasets, then a regular Oracle collection is preferable.
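As a sketch of the collections approach described above (the collection name and query are made up):

```sql
-- Load rows once per Apex session into a session-scoped collection.
BEGIN
  IF NOT APEX_COLLECTION.COLLECTION_EXISTS('EMP_EDIT') THEN
    APEX_COLLECTION.CREATE_COLLECTION_FROM_QUERY(
      p_collection_name => 'EMP_EDIT',
      p_query           => 'select empno, ename, sal from emp');
  END IF;
END;
/

-- The rows then persist for the life of the Apex session:
SELECT c001, c002, c003
FROM   apex_collections
WHERE  collection_name = 'EMP_EDIT';
```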
I hope this explains your issue.
Regards
Andre
-
Temp tables and transaction log
Hi All,
I am on SQL 2000.
When I am inserting(or updating or deleting) data to/from temp tables (i.e. # tables), is transaction log created for those DML operations?
The process is, we have a huge input dataset to process. So, we insert subset(s) of input data in temp table, treat that as our input set and do the processing in parts. Can I avoid transaction log generation for these intermediate steps?
Soon, we will be moving to 2008 R2. Are there any features in 2008, which can help me in avoiding this transaction logging?
Thanks in advance.
Every DML operation is logged in the LOG file. Is it possible to insert the data in small chunks?
http://www.dfarber.com/computer-consulting-blog/2011/1/14/processing-hundreds-of-millions-records-got-much-easier.aspx
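A chunked approach along those lines might look like this (SQL 2008+ syntax; the batch size and names are illustrative):

```sql
-- Copy the large input set in batches so each statement, and its
-- transaction log usage, stays small. Names are placeholders.
DECLARE @BatchSize INT = 50000;

WHILE 1 = 1
BEGIN
    INSERT INTO #Work (ID, Payload)
    SELECT TOP (@BatchSize) i.ID, i.Payload
    FROM   dbo.InputData AS i
    WHERE  NOT EXISTS (SELECT 1 FROM #Work AS w WHERE w.ID = i.ID);

    IF @@ROWCOUNT < @BatchSize BREAK;   -- last batch copied
END;
```

Note that this only bounds the work per statement; DML against tempdb is still logged, though more lightly than in a user database.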
Best Regards, Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Blog: Large scale of database and data cleansing
Remote DBA Services: Improving MS SQL Database Performance
-
Difference between temp table and CTE, performance-wise?
Hi Techies,
Can anyone explain CTEs vs. temp tables performance-wise? Which is the better object to use when implementing DML operations?
Thanks in advance.
Regards
Cham bee
Welcome to the world of performance tuning in SQL Server! The standard answer to this kind of question is:
It depends.
A CTE is a logical construct, which specifies the logical computation order for the query. The optimizer is free to recast computation order in such away that the intermediate result from the CTE never exists during the calculation. Take for instance this
query:
WITH aggr AS (
SELECT account_no, SUM(amt) AS amt
FROM transactions
GROUP BY account_no
)
SELECT account_no, amt
FROM aggr
WHERE account_no BETWEEN 199 AND 399
Transactions is a big table, but there is an index on account_no. In this example, the optimizer will use that index and only compute the total amount for the accounts in the range. If you were to make a temp table of the CTE, SQL Server would have no choice but to scan the entire table.
But there are also situations when it is better to use a temp table. This is often a good strategy when the CTE appears multiple times in the query. The optimizer is not able to pick a plan where the CTE is computed once, so it may compute the CTE multiple times. (To muddle the waters further, the optimizers in some competing products have this capability.)
Even if the CTE is only referred to once, it may help to materialise the CTE. The temp table has statistics, and those statistics may help the optimizer to compute a better plan for the rest of the query.
For the case you have at hand, it's a little difficult to tell, because it is not clear to me whether the conditions are the same for points 1, 2 and 3 or different. But the second one, removing duplicates, can be quite difficult with a temp table, but is fairly simple using a CTE with row_number().
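A sketch of the materialisation alternative, using the same hypothetical tables:

```sql
-- Materialise the aggregate once; the temp table gets statistics
-- that can help the optimizer with the rest of the query.
SELECT account_no, SUM(amt) AS amt
INTO   #aggr
FROM   transactions
GROUP  BY account_no;

SELECT account_no, amt
FROM   #aggr
WHERE  account_no BETWEEN 199 AND 399;
```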
Erland Sommarskog, SQL Server MVP, [email protected]
-
What are these DR$TEMP % tables and can they be deleted?
We are generating PL/SQL using ODMr and then periodically running the model in batch mode. Out of 26,542 objects in the DMUSER1 schema, 25,574 are DR$TEMP% tables. Should the process be cleaning itself up or is this supposed to be a manual process? Is the cleanup documented somewhere?
Thanks
Hi Doug,
The only DR$ tables/indexes built are the ones generated by the Build, Apply and Test Activities. I confirmed that they are deleted in ODMr 10.2.0.3. As I noted earlier, there was a bug in ODMr 10.2.0.2 which would lead to leakage when deleting Activities. You will have DR$ tables around for existing Activities, so do not delete these without validating that they are no longer part of an existing Activity.
You can track down the DR$ objects associated to an Activity by viewing the text step in the activity and finding the table generated for the text data. This table will have a text index created on it. The name of that text index is used as a base name for several tables which Oracle text utilizes.
Again, all of these are deleted when you delete an Activity with ODMr 10.2.0.3.
Thanks, Mark
-
Temp tables and deferred updates
Does anyone know why the following update to #test and #test1 is deferred, but the same update to the permanent table inputtable is direct?
I haven't found any documentation that would explain this.
@@version is Adaptive Server Enterprise/15.7.0/EBF 22305 SMP SP61 /P/Sun_svr4/OS 5.10/ase157sp6x/3341/64-bit/FBO/Fri Feb 21 11:55:38 2014
create proc proctest
as
begin
-- inputtable.fiId is int not null
-- Why this is a deferred update?
select fiId into #test from inputtable
update #test set fiId = 0
-- Why this is a deferred update?
create table #test1(fiId int not null)
insert #test1 select fiId from inputtable
update #test1 set fiId = 0
-- Yay. This is a direct update.
update inputtable set fiId = 0
end
go
set showplan on
go
exec proctest
go
|ROOT:EMIT Operator (VA = 2)
|
| |UPDATE Operator (VA = 1)
| | The update mode is deferred.
| |
| | |SCAN Operator (VA = 0)
| | | FROM TABLE
| | | #test
| | | Table Scan.
| | | Forward Scan.
| | | Positioning at start of table.
| | | Using I/O Size 16 Kbytes for data pages.
| | | With LRU Buffer Replacement Strategy for data pages.
| |
| | TO TABLE
| | #test
| | Using I/O Size 2 Kbytes for data pages.
|ROOT:EMIT Operator (VA = 2)
|
| |UPDATE Operator (VA = 1)
| | The update mode is deferred.
| |
| | |SCAN Operator (VA = 0)
| | | FROM TABLE
| | | #test1
| | | Table Scan.
| | | Forward Scan.
| | | Positioning at start of table.
| | | Using I/O Size 16 Kbytes for data pages.
| | | With LRU Buffer Replacement Strategy for data pages.
| |
| | TO TABLE
| | #test1
| | Using I/O Size 2 Kbytes for data pages.
|ROOT:EMIT Operator (VA = 2)
|
| |UPDATE Operator (VA = 1)
| | The update mode is direct.
| |
| | |SCAN Operator (VA = 0)
| | | FROM TABLE
| | | inputtable
| | | Table Scan.
| | | Forward Scan.
| | | Positioning at start of table.
| | | Using I/O Size 16 Kbytes for data pages.
| | | With LRU Buffer Replacement Strategy for data pages.
| |
| | TO TABLE
| | inputtable
| | Using I/O Size 2 Kbytes for data pages.
I don't have a documentation reference, but the optimizer appears to default to deferred mode when the #table and the follow-on DML operation are in the same batch (i.e., the optimizer makes a 'safe' guess during optimization based on limited details of the #table schema).
You can get the queries to operate in direct mode by forcing the optimizer to (re)compile the UPDATEs after the #tables have been created, e.g.:
- create the #table outside of the proc; during proc creation/execution the #tables already exist, so the optimizer can choose direct mode
- perform the UPDATEs within an exec() construct; exec() calls are processed within a separate/subordinate context, i.e., the #table is known at the time the exec() call is compiled, so direct mode can be chosen; the obvious downside is the overhead of the exec() call and the associated compilation phase ... which may still be an improvement over a) executing the UPDATE in deferred mode and/or b) recompiling the proc (see next bullet), ymmv
- induce a schema change to the #table so the proc is recompiled (with #table details known during the recompile), thus allowing use of direct mode; while adding/dropping indexes/constraints/columns will suffice, these also add extra processing overhead; I'd suggest a fairly benign schema change that has little/no effect on the table (e.g., alter table #test replace fiId default null); the obvious downside of this approach is the forced recompilation of the stored proc, which could add considerably to proc run times depending on the volume/complexity of queries in the rest of the proc
-
How to store fetched data in a temp table, and how can I use it further?
I want to store this fetched SUM value in a temp table, and then I want to use the value in other code. Can you help me do this?
SELECT SUM(SIGNEDDATA)
FROM FACPLAN
WHERE TIMEID IN
(SELECT TIMEID FROM Time
WHERE ID IN
(SELECT CURRENT_MONTH FROM mbrVERSION WHERE CURRENT_MONTH!=''))
If you want to assign it to a variable:
DECLARE @SUMAMOUNT INT -- you may change the datatype as required
Set @SUMAMOUNT = (SELECT SUM(SIGNEDDATA)
FROM FACPLAN
WHERE TIMEID IN
(SELECT TIMEID FROM Time
WHERE ID IN
(SELECT CURRENT_MONTH FROM mbrVERSION WHERE CURRENT_MONTH!='')))
And you can use @SUMAMOUNT for further processing
If you want to store it in a table
SELECT SUM(SIGNEDDATA) as SUMAMOUNT into #Temp
FROM FACPLAN
WHERE TIMEID IN
(SELECT TIMEID FROM Time
WHERE ID IN
(SELECT CURRENT_MONTH FROM mbrVERSION WHERE CURRENT_MONTH!=''))
-
Trees, temp tables and apex
Hello,
Has anyone had any luck building trees that go against temp tables? My tree works great with a regular table but runs flaky when I change the table to a temp table. Is this a limitation with APEX?
Thanks in advance,
Sam
Temporary tables that belong to a database session are not reliably accessible across Application Express page requests. You should look at Apex collections for temporary storage that will persist for the life of the Apex session.
Scott
-
Problem w/ Oracle temp tables and WL61SP3
I have an odd problem that arises when I use Weblogic 6.1, Oracle, callable statements,
and a stored procedure that uses temporary tables. Specifically, I don't get the
result set back that I expect.
Specifically, I have a stored procedure which first populates a temporary table,
then joins the temporary table with the additional tables to build the final result
set--i.e. which it returns as a cursor.
I've tested this procedure and it works fine when called from PLSQL and from Jboss
(using Oracle's driver). When I switch to Weblogic 6.1, *using the same database
and the same Oracle driver*, the returned result set (cursor) has no records.
I've added additional debugging to the stored procedure and find that it indeed
is populating the temporary table, but for some reason, the select/join acts like
the temporary table has no records. Similar 'exists' or 'in' clauses likewise
do not work.
In playing with the stored procedure I found that removing the join with the
temporary table brings back results. This isn't functionally correct--i.e. the
final result set has too many rows--but it confirmed that the problem lies in
the temporary tables.
Again, we've developed to run on multiple app servers so can switch from jboss
to Weblogic simply by running an ant task. The same callable statement executed
using the same driver, but from different app servers lead to different results.
The next step is to try to mess with the 'on commit preserve rows' clause in the
'create global temporary table' section and see if this has any effect. This procedure
doesn't perform any commits, however, so this shouldn't produce any changes.
Any suggestions? Thanks in advance.
bill milbratz
william milbratz wrote:
I have an odd problem that arises when I use Weblogic 6.1, Oracle, callable statements,
and a stored procedure that uses temporary tables. Specifically I don't get the
result set pack I expect.
Specifically, I have a stored procedure which first populates a temporary table,
then joins the temporary table with the additional tables to build the final result
set--i.e. which it returns as a cursor.
I've tested this procedure and it works fine when called from PLSQL and from Jboss
(using Oracle's driver). When I switch to Weblogic 6.1, *using the same database
and the same Oracle driver*, the returned result set (cursor) has no records.
First, make sure you are using the same driver, by ensuring that the driver you want
is in front of all weblogic stuff in the classpath the server startup script creates for the
server. We ship a version of oracle's thin driver, so it could be picked up instead
of yours if the classpath is weblogic-first.
Second, and this is a long shot, are you using our connection pools? I guess not,
but let me know...
Joe
How can I implement the equivilent of a temporary table with "on commit delete rows"?
hi,
I have triggers on several tables. During a transaction, I need to gather information from all of them, and once one of the triggers has all the information, it creates some data. I can't rely on the order of the triggers.
In Oracle and DB2, I'm using temporary tables with "ON COMMIT DELETE ROWS" to gather the information - They fit perfectly to the situation since I don't want any information to be passed between different transactions.
In SQL Server, there are local temporary tables and global. Local temp tables don't work for me since apparently they get deleted at the end of the trigger. Global tables keep the data between transactions.
I could use global tables and add some field that identifies the transaction, and in each access to these tables join by this field, but didn't find how to get some unique identifier for the transaction. @@SPID is the session, and sys.dm_tran_current_transaction
is not accessible by the user I'm supposed to work with.
Also with global tables, I can't just wipe data when "operation is done" since at the triggers level I cannot identify when the operation was done, transaction was committed and no other triggers are expected to fire.
Any idea which construct I could use to acheive the above - passing information between different triggers in the same transaction, while keeping the data visible to the current transaction?
(I saw similar questions but didn't see an adequate answer, sorry if posting something that was already asked).
Thanks!This is the scenario: If changes (CRUD) happen to both TableA and TableB, then log some info to TableC. Logic looks something like this:
Create Trigger TableA_C After Insert on TableA {
If info in temp tables available from TableB
Write info to TableC
else
Write to temp tables info from TableA
}
Create Trigger TableB_C After Insert on TableB {
If info in temp tables available from TableA
Write info to TableC
else
Write to temp tables info from TableB
}
So each trigger needs info from the other table, and once everything is available, info to TableC is written. Info is only from the current transaction.
Order of the triggers is not defined. Also, there's no guarantee that both triggers will fire - changes can happen to only TableA / B, and in that case I don't want to write anything to TableC.
The part that gets and sets info to temp table is implemented as temp tables with "on commit delete rows" in DB2 / Oracle.
What do you think? As I've mentioned, I could use global temp tables with a field that would identify the transaction, but didn't find something like that in SQL Server. And, the lifespan of local temp tables is too short.
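One possible construct, assuming SQL Server 2016 or later (where the built-in CURRENT_TRANSACTION_ID() function is available; the table and column names are made up), is a permanent staging table keyed by transaction id:

```sql
-- Shared staging table; rows are scoped to one transaction
-- by its id, so concurrent transactions don't see each other's rows.
CREATE TABLE dbo.TriggerScratch (
    TxnID    BIGINT   NOT NULL,   -- CURRENT_TRANSACTION_ID() value
    SrcTable SYSNAME  NOT NULL,
    Info     NVARCHAR(MAX) NULL
);

-- Inside each trigger (sketch):
-- INSERT INTO dbo.TriggerScratch (TxnID, SrcTable, Info)
-- VALUES (CURRENT_TRANSACTION_ID(), 'TableA', @info);
--
-- SELECT Info FROM dbo.TriggerScratch
-- WHERE TxnID = CURRENT_TRANSACTION_ID() AND SrcTable = 'TableB';
```

Rows left behind by finished transactions would still need periodic cleanup, which is the same weakness the question notes for global temp tables.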