Sequence column and index
Hi
I have a column whose values are populated by a sequence. Will that column work as an index, or do I need to create an explicit index? Please explain.
Edited by: Adi_consultant on Sep 25, 2008 11:04 PM
Hi,
Adi_consultant wrote:
So I need to create an index on a sequence based column, is it correct?Short, colloquial answer:
Yes.
Sesquipedalian, pedantic answer:
Indexes are optional (except to enforce UNIQUE constraints). You don't need to create one.
If you want to have an index, it has to be created, either
(a) implicitly (by adding a UNIQUE or PRIMARY KEY constraint), or
(b) explicitly (CREATE INDEX ...).
If, according to your business rules, the column values are unique, then I strongly encourage (a).
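A sketch of option (a), with hypothetical table and sequence names (the question didn't give any), showing that the PRIMARY KEY constraint creates the index for you:

```sql
-- Hypothetical names, for illustration only.
CREATE SEQUENCE order_seq;

CREATE TABLE orders (
  order_id NUMBER CONSTRAINT orders_pk PRIMARY KEY,  -- Oracle creates a unique index implicitly
  created  DATE
);

-- The column is populated from the sequence on insert:
INSERT INTO orders (order_id, created) VALUES (order_seq.NEXTVAL, SYSDATE);

-- Verify the implicit index exists:
SELECT index_name, uniqueness FROM user_indexes WHERE table_name = 'ORDERS';
```

No separate CREATE INDEX is needed; dropping the constraint drops the index with it.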
Similar Messages
-
1) If I miss the sequence while importing tables, how can I restore the correct sequence value for that particular column of that table?
2) I created 2 indexes on a table; the table is big, around 3 GB, and the indexes are around 1.3 GB and 1 GB each. When I run select count(*) from table_name it takes a long time to get the result; without the indexes it returns faster than before. Why is this happening? If I drop the indexes, will performance improve? This is Oracle 7.3.4 on Solaris 7. How can I avoid the index scan? Are there any hints to force a full table scan?
with regards
ramya
FULL hint:
SQL> set autotrace traceonly explain
SQL> select * from emp e where empno = 12 ;
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=41)
1 0 FILTER
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2 Card=1 Bytes=41)
3 2 INDEX (UNIQUE SCAN) OF 'PK_EMP' (UNIQUE)
SQL> select /*+ FULL(e) */ * from emp e where empno = 12 ;
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=41)
1 0 FILTER
2 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 Bytes=41)
SQL> disconnect
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.3.0 - Production
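The first question (re-syncing a sequence with the imported data) wasn't addressed above. One common approach, sketched here with assumed table and sequence names, is to recreate the sequence starting just past the column's current maximum:

```sql
-- Assumed names; on old releases such as 7.3.4, drop/recreate is the usual route.
SELECT MAX(id) FROM t1;                 -- note the current maximum, e.g. it returns 1042

DROP SEQUENCE t1_seq;
CREATE SEQUENCE t1_seq START WITH 1043; -- MAX(id) + 1
```

Any grants or synonyms on the dropped sequence would need to be recreated as well.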
SQL> -
What index is suitable for a table with no unique columns and no primary key
alpha | beta | gamma | col1 | col2 | col3
100   | 1    | -1    | a    | b    | c
100   | 1    | -2    | d    | e    | f
101   | 1    | -2    | t    | t    | y
102   | 2    | 1     | j    | k    | l
Sample data is shown above; below are the datatypes for each column:
alpha datatype: string
beta datatype: integer
gamma datatype: integer
col1, col2, col3 are all string datatypes.
Note: the columns are not unique individually, and we would be using alpha, beta, gamma together to uniquely identify a record. As you can see from my sample data, this table doesn't have an index. I would like to have an index created covering these columns (alpha, beta, gamma). I believe that creating a clustered index with covering columns will be better.
What would you recommend the index type be in this case? Say the data volume is 1 million records, and we always use the alpha, beta, gamma columns when we filter or query records.
Mudassar
Many thanks for your explanation.
When I tried the query below on my heap table, SQL Server suggested creating a NONCLUSTERED INDEX INCLUDING columns [beta], [gamma], [col1], [col2], [col3]:
SELECT [alpha]
,[beta]
,[gamma]
,[col1]
,[col2]
,[col3]
FROM [TEST].[dbo].[Test]
where [alpha]='10100'
My question is: why didn't it suggest a CLUSTERED index, and why did it choose a NONCLUSTERED index instead?
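For reference, the suggestion corresponds roughly to a covering nonclustered index like the one below. The key column is an assumption here ([alpha], since that is the filter column; the suggestion text above was cut off before the key column):

```sql
-- Sketch of a covering nonclustered index; key column assumed from the WHERE clause.
CREATE NONCLUSTERED INDEX IX_Test_alpha
ON [TEST].[dbo].[Test] ([alpha])
INCLUDE ([beta], [gamma], [col1], [col2], [col3]);
```

The INCLUDE columns are stored only at the leaf level, so the query above can be answered entirely from the index without touching the heap.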
Mudassar -
Select from table in group of columns and generate a sequence number
I have to select data from a table in group of columns and generate a sequence for every group resetting the sequence to start from 1 onwards.
For example:
Data:
Col1 Col2 Col3 Col4
A NA KA 2009-08-13
B NA KA 2009-08-13
C NA KA 2009-08-13
A NA KA 2009-08-13
B NA KA 2009-08-13
A NA KA 2009-08-13
Expected output from Select Statement:
Col1 Col2 Col3 Col4 Seq_No
A NA KA 2009-08-13 1
A NA KA 2009-08-13 2
A NA KA 2009-08-13 3
B NA KA 2009-08-13 1
B NA KA 2009-08-13 2
C NA KA 2009-08-13 1
How can this be possible with a SELECT statement? Is it possible to assign seq numbers for a group of columns and reset it when it changes? In the above example, all columns form the key to generate the seq number
I know it can be done using Stored procedures and that is how I am doing it now by introducing a temporary table.
Can anyone help me in this regard? Please let me know if the question is vague to understand!
Thanks,
Nachi
with t as (select 'A' col1, 'NA' col2, 'KA' col3, '2009-08-13' col4 from dual
union all
select 'B' col1,'NA' col2 ,'KA' col3,'2009-08-13' col4 from dual
union all
select 'C' col1,'NA' col2 ,'KA' col3,'2009-08-13' col4 from dual
union all
select 'A' col1,'NA' col2 ,'KA' col3,'2009-08-13' col4 from dual
union all
select 'B' col1,'NA' col2 ,'KA' col3,'2009-08-13' col4 from dual
union all
select 'A' col1,'NA' col2 ,'KA' col3,'2009-08-13' col4 from dual)
select t.*, row_number() over (partition by col1,col2,col3,col4 order by col1,col2,col3,col4) from t
You can replace "partition by col1,col2,col3,col4" with only the columns you need for the grouping condition, and likewise the "order by" can be done on just the column you need. -
Maximum length allowed for column name, index name and table name?
Hi,
I want to know: what is the maximum length allowed for column names, table names and index names in MaxDB?
Regards
Raj
Hi Raja,
simply check the catalog:
sqlcli bwt=> \dc domain.columns
Table "DOMAIN.COLUMNS"
| Column Name | Type | Length | Nullable | KEYPOS |
| ---------------- | ------------ | ------ | -------- | ------ |
| SCHEMANAME | CHAR UNICODE | 32 | YES | |
| OWNER | CHAR UNICODE | 32 | YES | |
| TABLENAME | CHAR UNICODE | 32 | YES | |
| COLUMNNAME | CHAR UNICODE | 32 | YES | |
and
sqlcli bwt=> \dc domain.indexes
Table "DOMAIN.INDEXES"
| Column Name | Type | Length | Nullable | KEYPOS |
| ------------------ | ------------ | ------ | -------- | ------ |
| SCHEMANAME | CHAR UNICODE | 32 | YES | |
| OWNER | CHAR UNICODE | 32 | YES | |
| TABLENAME | CHAR UNICODE | 32 | YES | |
| INDEXNAME | CHAR UNICODE | 32 | YES | |
regards,
Lars -
How to pick Cobol Sequence and Index files in XI
Hi,
Has anyone worked with a scenario to pick up COBOL sequential and indexed files in XI?
Hi,
If by COBOL sequence you mean the GDG sequence, then you could use the masking concept, but if you are expecting to pick up the files in the same sequence then that is not possible.
For that you need to design a program that will add the next file once the first file has been processed by XI (you need to delete or archive the file after processing).
Thanks
swarup -
Spatial index on table with object-column (and inheritance)
Hi!
Is it possible to create a spatial index on a table with an object-column (and inheritance) like this:
CREATE OR REPLACE TYPE feature_type AS OBJECT (
shape MDSYS.SDO_GEOMETRY
) NOT FINAL;
CREATE OR REPLACE TYPE building_type UNDER feature_type (
name VARCHAR2(50)
);
CREATE TABLE features ( no NUMBER PRIMARY KEY, feature feature_type);
[...] user_sdo_geom_metadata [...]
Then
CREATE INDEX features_idx ON features(feature.shape) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
throws:
ORA-01418: specified index does not exist
Curious! :)
If I define feature_type with "NOT FINAL" option but without subtypes, I get (create index):
ORA-00604: error occurred at recursive SQL level 1
ORA-01460: unimplemented or unreasonable conversion requested
So I think that, besides object tables, inheritance also isn't supported with Oracle Spatial!?
Thanks,
Michael
ps:
We use Oracle9i Enterprise Edition Release 9.0.1.4.0 / Linux.
Does Oracle9i 9.2 solve these problems?
Hi
You'll need to be on 9.2 to do this....
Dom -
Initialize sub sequence column values on insert?
I asked this on stack overflow, but it was recommended that I also ask here.
http://stackoverflow.com/questions/12982875/initialize-sub-sequence-column-values-on-insert-oracle
I would like my table to sequence its "order by" column based on its TEMPLATE_ID. I would like this to happen on insert (via an insert trigger, probably). For example, if I run the following inserts, I should get the following table values.
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (1, 1)
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (2, 1)
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (3, 1)
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (4, 2)
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (5, 2)
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (6, 2)
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (7, 2)
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (8, 3)
ID TEMPLATE_ID ORDER_BY
1 1 1
2 1 2
3 1 3
4 2 1
5 2 2
6 2 3
7 2 4
8 3 1
I first tried to create this trigger, but it gives me an error when I insert.
create or replace
trigger TEMPLATE_ATTRIBUTES_AF_INS_TRIG
after insert on TEMPLATE_ATTRIBUTES
for each row
begin
if :NEW.ORDER_BY is null then
update TEMPLATE_ATTRIBUTES
set ORDER_BY = (select coalesce(MAX(ta.ORDER_BY), 0) + 1 from TEMPLATE_ATTRIBUTES ta where ta.TEMPLATE_ID = :NEW.TEMPLATE_ID)
where ID = :NEW.ID;
end if;
end;The error it gives me is: "table TEMPLATE_ATTRIBUTES is mutating, trigger/function may not see it"
So I need a different way to build this trigger. I also need it to be "thread safe", so that if these two inserts occur in different sessions at the same time, the resulting records will still get different ORDER_BY values:
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (1, 1)
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (2, 1)
Edit:
I tried the common workaround for the "table is mutating, trigger/function may not see it" error, and the workaround "worked", but it was not "thread safe". I tried to add locking, but it gave me another error on insert:
create or replace package state_pkg
as
type ridArray is table of rowid index by binary_integer;
newRows ridArray;
empty ridArray;
end;
create or replace trigger TEMPLATE_ATTRIBUTES_ORDER_BY_TB4
before insert on TEMPLATE_ATTRIBUTES
begin
state_pkg.newRows := state_pkg.empty;
end;
create or replace trigger TEMPLATE_ATTRIBUTES_ORDER_BY_TAF1
after insert on TEMPLATE_ATTRIBUTES for each row
begin
if :NEW.ORDER_BY is null then
state_pkg.newRows( state_pkg.newRows.count+1 ) := :new.rowid;
end if;
end;
create or replace trigger TEMPLATE_ATTRIBUTES_ORDER_BY_TAF2
after insert on TEMPLATE_ATTRIBUTES
declare
v_request number;
v_lockhandle varchar2(200);
begin
dbms_lock.allocate_unique('TEMPLATE_ATTRIBUTES_ORDER_BY_lock', v_lockhandle);
while v_request <> 0 loop
v_request:= dbms_lock.request(v_lockhandle, dbms_lock.x_mode);
end loop;
begin
for i in 1 .. state_pkg.newRows.count loop
update TEMPLATE_ATTRIBUTES
set ORDER_BY = (select coalesce(MAX(q.ORDER_BY), 0) + 1 from TEMPLATE_ATTRIBUTES q where q.TEMPLATE_ID = (select q2.TEMPLATE_ID from TEMPLATE_ATTRIBUTES q2 where q2.rowid = state_pkg.newRows(i)))
where rowid = state_pkg.newRows(i);
end loop;
v_request:= dbms_lock.release(v_lockhandle);
EXCEPTION WHEN OTHERS THEN
v_request:= dbms_lock.release(v_lockhandle);
raise;
end;
end;
This gives me:
ORA-04092: cannot COMMIT in a trigger ORA-06512: at "SYS.DBMS_LOCK", line 250 ORA-06512: at "TEMPLATE_ATTRIBUTES_ORDER_BY_TAF2", line 5 ORA-04088: error during execution of trigger 'TEMPLATE_ATTRIBUTES_ORDER_BY_TAF2' ORA-06512
Edit 2: The ORDER_BY column must be an updatable column. ID actually uses a sequence and a before-insert trigger to set its values. I thought I was simplifying my question when I included it in the insert examples, but that was incorrect. ORDER_BY's initial value is not really related to ID, but rather to the order in which the records are inserted. But ID is sequenced, so you can use that if it helps.
Check here below:
create table TEMPLATE_ATTRIBUTES
( ID INTEGER
, TEMPLATE_ID INTEGER
, ORDER_BY INTEGER
);
CREATE OR REPLACE TRIGGER templ_attr_bf_ins_trg
BEFORE INSERT
ON template_attributes
FOR EACH ROW
BEGIN
IF :new.order_by IS NULL
THEN
SELECT NVL (MAX (ta.order_by), 0) + 1
INTO :new.order_by
FROM template_attributes ta
WHERE ta.template_id = :new.template_id;
END IF;
END;
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (1, 1);
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (2, 1);
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (3, 1);
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (4, 2);
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (5, 2);
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (6, 2);
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (7, 2);
INSERT INTO TEMPLATE_ATTRIBUTES (ID, TEMPLATE_ID) VALUES (8, 3);
SELECT * FROM TEMPLATE_ATTRIBUTES;
Output:
ID TEMPLATE_ID ORDER_BY
1 1 1
2 1 2
3 1 3
4 2 1
5 2 2
6 2 3
7 2 4
8 3 1
Let me also comment that I don't like this solution. It might conflict with multiuser access.
If you just need the column to order the table, you can use a sequence and then order your table by template_id, order_by (generated by a sequence).
In this way you will not have a problem with multiuser access. Do you care that the order_by column does not start from 1 for each template_id and has "holes" in the sequence for that template_id?
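A sketch of that suggestion (the sequence name is assumed): let a plain sequence fill ORDER_BY, and derive the per-template, gap-free position at query time, so no trigger ever needs to read the table it fires on:

```sql
-- Assumed sequence name; ORDER_BY just gets a globally increasing value.
CREATE SEQUENCE template_attr_seq;

-- On insert (via trigger or application code): ORDER_BY := template_attr_seq.NEXTVAL

-- The per-template numbering is then computed when you read:
SELECT id, template_id,
       ROW_NUMBER() OVER (PARTITION BY template_id ORDER BY order_by) AS seq_in_template
FROM template_attributes
ORDER BY template_id, order_by;
```

Because two sessions can never get the same NEXTVAL, the ordering is safe under concurrent inserts without any DBMS_LOCK machinery.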
Regards.
Al
Edited by: Alberto Faenza on Oct 22, 2012 5:11 PM -
Multi-column BITMAP index vs. multiple BITMAP indices?
Given the table (simple, made-up example):
CREATE TABLE applicant_diversity_info (
applicant_diversity_id NUMBER(12), PRIMARY KEY(applicant_diversity_id),
apply_date DATE,
ssn_salted_md5 RAW(16),
gender CHAR(1), CHECK ( (gender IS NULL OR gender IN ('M','F')) ),
racial_continent VARCHAR2(30), CHECK ( (racial_continent IS NULL
OR racial_continent IN ('Europe','Africa','America','Asia_Pacific')) ),
ethnic_supergroup VARCHAR2(30), CHECK ( (ethnic_supergroup IS NULL OR ethnic_supergroup IN ('Latin American','Other')) ),
hire_salary NUMBER(11,2),
hire_month DATE,
termination_salary NUMBER(11,2),
termination_month DATE,
termination_cause VARCHAR2(30), CHECK ( (termination_cause IS NULL
OR termination_cause IN ('Resigned','Leave of Absence','Laid Off','Performance','Cause')) )
);
Oracle (syntactically) allows me to create either one BITMAP index over all four small-cardinality columns
CREATE BITMAP INDEX applicant_diversity_diversity_idx ON applicant_diversity_info (
gender, racial_continent, ethnic_supergroup, termination_cause );
or four independent indexes
CREATE BITMAP INDEX applicant_diversity_gender_idx ON applicant_diversity_info ( gender );
CREATE BITMAP INDEX applicant_diversity_race_idx ON applicant_diversity_info ( racial_continent );
etc.
What is the difference between the two approaches? Is there any meaningful difference in disk space between the one multi-column index and the four single-column indexes? Does it make a difference in what the query planner will consider?
And, if I define one multi-column BITMAP index, does the order of columns matter?
>
What is the difference between the two approaches? Is there any meaningful difference in disk space between the one multi-column index and the four single-column indexes? Does it make a difference in what the query planner will consider?
And, if I define one multi-column BITMAP index, does the order of columns matter?
>
You may want to read this two-part blog post, which answers that exact question, by recognized expert Richard Foote:
http://richardfoote.wordpress.com/2010/05/06/concatenated-bitmap-indexes-part-i-two-of-us/
http://richardfoote.wordpress.com/2010/05/12/concatenated-bitmap-indexes-part-ii-everybodys-got-to-learn-sometime/
As with many things Oracle, the answer is 'it depends'.
In short, the same considerations apply for a concatenated index whether it is bitmap or B-tree: 1) will the leading column usually be in the predicate, and 2) will most or all of the index columns be specified in the queries?
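As an illustration of point 1, with the four single-column bitmap indexes the optimizer can combine whichever ones match the predicates via a BITMAP AND, whereas the concatenated index depends on its leading column (gender, in the definition above) appearing in the query:

```sql
-- Either single-column bitmap index can serve alone, or both can be ANDed,
-- even though no gender predicate is present:
SELECT COUNT(*)
FROM applicant_diversity_info
WHERE racial_continent = 'Asia_Pacific'
  AND termination_cause = 'Resigned';
```

With only the concatenated (gender, racial_continent, ...) index, this query cannot use a leading-column access path and the optimizer would have to fall back to a full scan or an index skip-style strategy.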
Here are some quotes from part 1
>
Many of the same issues and factors in deciding to create a single, multi-column index vs. several, single column indexes apply to Bitmap indexes as they do with B-Tree indexes, although there are a number of key differences to consider as well.
Another thing to note regarding a concatenated Bitmap index is that the potential number of index entries is a product of distinct combinations of data of the indexed columns.
A concatenated Bitmap index can potentially use less or more space than the corresponding single-column indexes; it depends on the number of index entries that are derived and the distribution of the data within the table.
>
Here is the lead quote from part 2
>
The issues regarding whether to go for single column indexes vs. concatenated indexes are similar for Bitmap indexes as they are for B-Tree indexes.
It's generally more efficient to access a concatenated index, as it's only the one index, with less processing and fewer throwaway rowids/rows to contend with. However, it's more flexible to have single-column indexes, especially for Bitmap indexes, which are somewhat designed to be used concurrently, as concatenated indexes are heavily dependent on the leading column being known in queries. -
For some business requirements, users want to extract values from a multi-value enabled lookup column and add items to another list based on each separate value. In contrast, others want to find duplicate values in the list, merge the associated values into a multi-value enabled column, and then add items to another list based on the merged value. All of these can be achieved using SharePoint Designer 2013 Workflow.
How to extract values from a multi-value enabled lookup column and add items to another list based on each separate value using SharePoint Designer 2013.
Important actions: Loop Shape; Utility Actions
Three scenarios
Things to note
Steps to create Workflow
How to merge values to a multi-value enabled column and add an item to another list based on the merged value using SharePoint Designer 2013.
Important actions: Call HTTP Web Service; Build Dictionary
Things to note
Steps to create Workflow
How to extract values from a multi-value enabled lookup column and add items to another list based on each separate value using SharePoint Designer 2013.
For example, they have three lists as below. They want to extract values from the Destinations column in Lookup2 and add items to Lookup3 based on each country, and set Title to Current Item: ID.
Lookup1:
Title (Single line of text)
Lookup2:
Title (Single line of text), Destinations (Lookup; Get information from: Lookup1 in Title column).
Lookup3:
Title (Single line of text), Country (Single line of text).
Important action
1. Loop Shape: SharePoint Designer 2013 supports two types of loops: loop n times and loop with condition.
Loops must also conform to the following rules:
Loops must be within a stage, and stages cannot be within a loop.
Steps may be within a loop.
Loops may have only one entry and one exit point.
2. Utility Actions: It contains many actions, such as ‘Extract Substring from Index of String’ and ‘Find substring in String’.
Three scenarios
We need to loop through the string returned from the lookup column and look for commas. There are three scenarios:
1. No comma but string is non-empty so there is only one country.
2. At least one comma, so there are two or more countries to loop through.
3. In the loop we have consumed all the commas so we have found the last country.
Things to note
There are two things to note:
1. "Find string in string (output to Variable: index)" will return -1 if it doesn't find the searched-for string.
2. In the opening statement "Set Variable: Countries to Current Item:Destinations", set the return field to "Lookup Values, Comma Delimited".
Steps to create Workflow
Create a custom list named Lookup1.
Create a custom list named Lookup2, add column: Destinations (Lookup; Get information from: Lookup1 in Title column).
Create a custom list named Lookup3, add column: Country (Single line of text).
Create a workflow associated to Lookup2.
Add conditions and actions:
Start the workflow automatically when an item is created.
Add an item to Lookup2; the workflow will then be started automatically and create multiple items in Lookup3.
See the below in workflow History List:
How to merge values to a multi-value enabled column and add an item to another list based on the merged value using SharePoint Designer 2013
For example, they have three lists as below. They want to find duplicate values in the Title column in Lookup3, merge the Country column into a multi-value enabled column, and then add an item to Lookup2, setting the Title to Current Item: Title.
Lookup1:
Title (Single line of text)
Lookup3:
Title (Single line of text), Country (Single line of text).
Lookup2:
Title (Single line of text), Test (Single line of text).
Important actions
"Call HTTP Web Service" action: In SharePoint 2013 workflows, we can call a web service using a new action introduced in SharePoint 2013 named Call HTTP Web Service. This action is flexible and allows you to make simple calls to a web service easily, or, if needed, you can create more complex calls using HTTP verbs as well as add HTTP headers.
"Build Dictionary" action:
The Dictionary variable type is a new variable type in the SharePoint 2013 Workflow.
The following are the three actions specifically designed for the Dictionary variable type: Build Dictionary, Count Items in a Dictionary and Get an Item from a Dictionary.
The "Call HTTP Web Service" workflow action would be useless without the new "Dictionary" workflow action.
Things to note
The HTTP URI is set to https://sitename/_api/web/lists/GetByTitle('listname')/items?$orderby=Id%20desc and the HTTP method is set to "GET". The list will then be sorted by Id in descending order.
Use "Get d/results(0)/Id from Variable: ResponseContent (Output to Variable: maxid)" to get the Max ID.
Use "Set Variable: minid to Current List:ID" to get the Min ID.
Use "Copy from Variable: destianation, starting at 1 (Output to Variable: destianation)" to remove the space.
Steps to create Workflow
Create a custom list named Lookup1.
Create a custom list named Lookup2, add column: Test (Single line of text).
Create a custom list named Lookup3, add column: Country (Single line of text).
Create a workflow associated to Lookup3.
Add a new "Build Dictionary" action to define the HTTP request header:
Add a "Call HTTP Web Service" action, click on this and paste your HTTP request.
To associate the RequestHeader variable, select the Call action property and set the RequestHeaders property to RequestHeader.
In the Call action, click on response and associate the response with a new variable: ResponseContent (of type Dictionary).
After the Call action add Get item from Dictionary action to get the Max ID.
Add Set Workflow Variable action to get the Min ID.
Add Loop Shape (Loop with Condition) to get all the duplicate titles and integrate them to a string.
Create item in Lookup2.
The final Stage should look like this:
Start the workflow automatically when an item is created.
Add an item to Lookup3; the workflow will then be started automatically and create an item in Lookup2.
See the below in workflow History List:
References
SharePoint Designer 2013 - Extracting values from a multi-value enabled lookup column into a dictionary as separate items:
http://social.technet.microsoft.com/Forums/en-US/97d34468-1b53-4741-88b0-958472f8ca9a/sharepoint-designer-2013-extracting-values-from-a-multivalue-enabled-lookup-column-into-a
Workflow actions quick reference (SharePoint 2013 Workflow platform):
http://msdn.microsoft.com/en-us/library/jj164026.aspx
Understanding Dictionary actions in SharePoint Designer 2013:
http://msdn.microsoft.com/en-us/library/office/jj554504.aspx
Working with Web Services in SharePoint 2013 Workflows using SharePoint Designer 2013:
http://msdn.microsoft.com/en-us/library/office/dn567558.aspx
Calling the SharePoint 2013 Rest API from a SharePoint Designer Workflow:
http://sergeluca.wordpress.com/2013/04/09/calling-the-sharepoint-2013-rest-api-from-a-sharepoint-designer-workflow/
GREAT info, but it may be helpful to note that when replacing a portion of the variable "Countries" with a whitespace character, you may cause the workflow to fail in a few specific cases (certain lookup fields will not accept this and will automatically cancel). I only found this out when recreating your workflow on a similar, but much more complex, list set.
To resolve this issue, I used another utility action (Extract Substring from Index of List) to clear out the whitespace. I configured it as "Copy from Variable: Countries, starting at 1 (Output to Variable: Countries)", which takes care of this issue in those few cases.
Otherwise, WOW! AWESOME JOB! Thanks! :) -
Performance issue and indexing doesn't help
I created a view whose SQL is basically simple, but I need to group data based on a value returned from a function, and I think this is slowing the performance. I first added a regular index on this, then added a function-based index, but neither helps. I get the data I need, but it takes too long. I hope someone can give me some ideas about how to optimize the performance of my SQL.
The base table has 1318408 rows. I need to select only a few columns and group and count the rows based on a date value in one of the columns. However the date in the table contains only an end of week value and I need to report based on quarters. The report needs both a date value and a text value for the quarter. So I created two functions that accept a date value and return a date value for the quarter start date and a text value for the quarter the date falls within respectively; my SQL is like this:
select
GLC_DATE2CAL_QRT_STARTDATE_FN(s.work_week_end_date) cyquarter_start_date,
GLC_DATE2CAL_QRTYR_FN(s.work_week_end_date) cyquarter_text,
s.ethnicity ethnicity_code,
et.description ethnicity_desc,
count( unique employee_id ) number_employees
from cpr_employees_snapshot s, ct_vendor_ethnicities et
where trim(s.ethnicity) = et.ethnicity_id
group by GLC_DATE2CAL_QRT_STARTDATE_FN(s.work_week_end_date), GLC_DATE2CAL_QRTYR_FN(s.work_week_end_date), s.ethnicity, et.description
this takes about 1 1/2 minutes to retrieve the data
when I do not use the functions and run this SQL:
select
s.work_week_end_date,
s.ethnicity ethnicity_code,
et.description ethnicity_desc,
count( unique employee_id ) number_employees
from cpr_employees_snapshot s, ct_vendor_ethnicities et
where trim(s.ethnicity) = et.ethnicity_id
group by s.work_week_end_date, s.ethnicity, et.description
it takes 7 seconds.
Well, I was successful in writing a CASE statement that works in a select, and it reduces the retrieval time to 5 seconds. The problem now is that when I create a view with it, the view is created but with compilation errors; if the select works without errors, I don't know why that is happening. Here is the CREATE VIEW SQL:
CREATE OR REPLACE FORCE VIEW GLC_WORKER_ETHNICITY_VW
(
cyquarter_start_date,
cyquarter_text,
cyquarter_end_date,
ethnicity_code,
ethnicity_desc,
number_employees
)
AS
select
case to_number(to_char(s.work_week_end_date, 'mm'))
when 1 then to_date('1/1/' || to_char(s.work_week_end_date, 'yyyy'),'mm/dd/yyyy')
when 2 then to_date('1/1/' || to_char(s.work_week_end_date, 'yyyy'),'mm/dd/yyyy')
when 3 then to_date('1/1/' || to_char(s.work_week_end_date, 'yyyy'),'mm/dd/yyyy')
when 4 then to_date('4/1/' || to_char(s.work_week_end_date, 'yyyy'),'mm/dd/yyyy')
when 5 then to_date('4/1/' || to_char(s.work_week_end_date, 'yyyy'),'mm/dd/yyyy')
when 6 then to_date('4/1/' || to_char(s.work_week_end_date, 'yyyy'),'mm/dd/yyyy')
when 7 then to_date('7/1/' || to_char(s.work_week_end_date, 'yyyy'),'mm/dd/yyyy')
when 8 then to_date('7/1/' || to_char(s.work_week_end_date, 'yyyy'),'mm/dd/yyyy')
when 9 then to_date('7/1/' || to_char(s.work_week_end_date, 'yyyy'),'mm/dd/yyyy')
when 10 then to_date('10/1/' || to_char(s.work_week_end_date, 'yyyy'),'mm/dd/yyyy')
when 11 then to_date('10/1/' || to_char(s.work_week_end_date, 'yyyy'),'mm/dd/yyyy')
when 12 then to_date('10/1/' || to_char(s.work_week_end_date, 'yyyy'),'mm/dd/yyyy')
end cyquarter_start_date,
'Q' || to_char(s.work_week_end_date, 'Q') || ' CY ' || to_char(s.work_week_end_date, 'yyyy') cyquarter_text,
s.ethnicity ethnicity_code,
et.description ethnicity_desc,
count( unique employee_id ) number_employees
from cpr_employees_snapshot s, ct_vendor_ethnicities et
where package_id = 727260
and trim(s.ethnicity) = et.ethnicity_id
group by s.work_week_end_date, s.ethnicity, et.description -
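Two observations, offered tentatively. First, a likely cause of the compilation errors: the view's column list names six columns (including cyquarter_end_date), but the SELECT produces only five expressions, so the counts don't match. Second, Oracle's TRUNC(date, 'Q') returns the first day of the calendar quarter, which replaces the twelve-branch CASE, and ADD_MONTHS can derive the missing quarter end:

```sql
-- Sketch of the quarter expressions using TRUNC/ADD_MONTHS:
SELECT TRUNC(s.work_week_end_date, 'Q')                    AS cyquarter_start_date,
       'Q' || TO_CHAR(s.work_week_end_date, 'Q') || ' CY ' ||
       TO_CHAR(s.work_week_end_date, 'YYYY')               AS cyquarter_text,
       ADD_MONTHS(TRUNC(s.work_week_end_date, 'Q'), 3) - 1 AS cyquarter_end_date
FROM cpr_employees_snapshot s;
```

Being plain built-in expressions rather than PL/SQL function calls, these also avoid the per-row SQL-to-PL/SQL context switches that made the original view slow.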
My spreadsheet looks like this:
Monday Tuesday Wednesday
Name 1 OFF 4:30 PM 4:30 PM
Name 2 5 PM OFF 4:30 PM
Name 3 4:30 PM 5 PM OFF
Name 4 4 PM OFF OFF
I would like to create a spreadsheet for each day that will display the values sorted by time, as follows (e.g. Monday):
Name In Time
Name 4 4 PM
Name 3 4:30 PM
Name 2 5 PM
Any help would be greatly appreciated. Thanks!
Here's an example, using the provided data:
I've set the alignment on Main to Automatic (except for row 1) to distinguish between numeric and quasi numeric values (aligned right) and text (aligned left). This is a visual aid to developing the table, and would likely be changed for appearance in the end version.
Columns E, F and G of Main are index columns listing the RANK of numeric/date and time values in columns A, B and C respectively. Text values cause RANK to throw an error, which is caught by IFERROR, which returns a value of 999, chosen to be well above any of the RANK values returned. A small amount ( ROW()/100000 ) is added to each result to prevent duplicate results in cases like column D, where duplicate times appear.
Formula: Main::E2: =IFERROR(RANK(B2,B,1),999)+ROW()/100000
Fill down the column, and right to column G.
These columns may be hidden.
The three daily columns use a single formula each, revised to match the index columns from which they determine the row containing each piece of data to be copied, and to match the columns from which they retrieve that data. The formulas from row 2 of these tables are listed here in the order (left to right) that they are used in the second row of tables above. Parts that are edited from one formula to another are shown in bold.
=IF(SMALL(Main :: $E,ROW()-1)>999,"",OFFSET(Main :: $A$1,MATCH(SMALL(Main::$E,ROW()-1),Main :: $E,0)-1,0))
=IF(SMALL(Main :: $E,ROW()-1)>999,"",OFFSET(Main :: $A$1,MATCH(SMALL(Main::$E,ROW()-1),Main :: $E,0)-1,1))
=IF(SMALL(Main :: $F,ROW()-1)>999,"",OFFSET(Main :: $A$1,MATCH(SMALL(Main::$F,ROW()-1),Main :: $F,0)-1,0))
=IF(SMALL(Main :: $F,ROW()-1)>999,"",OFFSET(Main :: $A$1,MATCH(SMALL(Main::$F,ROW()-1),Main :: $F,0)-1,2))
=IF(SMALL(Main :: $G,ROW()-1)>999,"",OFFSET(Main :: $A$1,MATCH(SMALL(Main::$G,ROW()-1),Main :: $G,0)-1,0))
=IF(SMALL(Main :: $G,ROW()-1)>999,"",OFFSET(Main :: $A$1,MATCH(SMALL(Main::$G,ROW()-1),Main :: $G,0)-1,3))
Each of the formulas is filled down its column.
Each of the functions used is described in the iWork Formulas and Functions User Guide, a useful resource to have on hand when you are writing (or attempting to 'decode') Numbers formulas. The guide (and the Numbers '09 User Guide) may be downloaded from the Help menu in Numbers '09.
Regards,
Barry -
How do I match a clip to sequence settings and sequence presets?
I'm getting the warning: Attention - This clip does not match this sequence's settings or any of our sequence presets.
I'm trying to figure out how to set things up in Compressor so that my clips do match my sequence settings and presets. The issue seems to be with the audio not matching.
Here are the sequence settings:
Audio-2 Outputs
Frame Size- 1920x1080
Vid Rate-24fps
Compressor-Apple ProRes 422 (proxy)
Aud Rate 48.0 KHz
Aud Format 32-bit floating Point
I shot all the footage with a Canon 5D Mark II. I'm using Compressor to convert all my original clips. The only thing I don't seem to be able to match is the audio. I can't find any way in Compressor to encode clips that have "two outputs" in the Audio column.
The dialogue box for "Sound Settings" in Compressor doesn't offer any sort of options, that I can find anyway, to generate clips with "two outputs."
These are the Compressor settings:
Description: Apple ProRes 422 10-bit video with audio pass-through. Settings based off the source resolution and frame-rate.
File Extension: mov
Estimated size: 16.36 GB/hour of source
Audio Encoder
Apple Lossless, Stereo (L R), 48.000 kHz
Video Encoder
Format: QT
Width: (100% of source)
Height: (100% of source)
Selected: 1920 x 1080
Pixel aspect ratio: Square
Crop: None
Padding: None
Frame rate: (100% of source)
Selected: 24
Frame Controls: Automatically selected: Off
Codec Type: Apple ProRes 422 (Proxy)
Multi-pass: Off, frame reorder: Off
Automatic gamma correction
Progressive
Pixel depth: 24
Spatial quality: 50
Min. Spatial quality: 0
Temporal quality: 0
Min. temporal quality: 0
These are the sequence settings:
Frame Size: 1920 x 1080
Editing Timebase: 24fps
Field Dominance: None
Pixel Aspect Ratio: Square
Anamorphic 16:9: Off
Video Processing YUV allowed (8-bit)
Compressor: Apple ProRes 422 (Proxy)
Millions of Colors (24 bit)
No Data Rate Limit
No Keyframes Set
Quality: 100
Audio Settings:
16-bit 48.000 kHz Stereo
I made screen shots, but don't see how to paste them into this post. -
Hi,
I have a table t1 which has nearly 20000 rows. It is accessed by a query with three columns in its WHERE clause, let's say col1, col2 and col3. The table doesn't have any index, so the query does a full table scan. Now if I put an index on all three columns, it uses the index and avoids the FTS. Also, if I index just two of the columns instead of three, it still uses the index (something I don't understand why). My question is: should I index the two columns or three columns? The third column, which I left out, has nearly 8000 distinct values.
the query is of the form:
select col5 from table1 where col1=value1 and col2=value2 and col3=value3 and rownum=1
the execution plan with index on three columns is:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 24 | 2 (0)| 00:00:01 |
|* 1 | COUNT STOPKEY | | | | | |
|* 2 | TABLE ACCESS BY INDEX ROWID| table1 | 1 | 24 | 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | ind-3col | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("col1"=6003 AND "col2"=1532 AND "col3"=267)
the execution plan with index on two columns is:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 24 | 2 (0)| 00:00:01 |
|* 1 | COUNT STOPKEY | | | | | |
|* 2 | TABLE ACCESS BY INDEX ROWID| table1 | 1 | 24 | 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | ind-2col | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter(ROWNUM=1)
2 - filter("col3"=267)
3 - access("col1"=6003 AND "col2"=1532)
I don't know if I should index on 2 columns or 3 columns...can someone suggest?
Thanks
Edited by: orausern on Feb 8, 2010 5:28 AM
Edited by: orausern on Feb 8, 2010 5:29 AM

Hi,
Whether or not to index these columns depends entirely on the type of queries (update/delete/select) being run against the table.
So you need to check all, or at least the majority, of the queries that will run against this table to decide which columns should be indexed.
For example: if the query you posted were the only one running against this table, I would index all three columns.
But if only 10-20% of the queries use all three columns, and the remaining 70-80% use the same two of those three columns, then I would index only those two columns.
The queries also determine which column should be the leading column of the index.
An index makes SELECT queries run faster in most cases, but it also makes INSERT statements run slower and uses extra space. Keeping all these factors in mind, decide which columns to index and in which order.
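The trade-off described above can be reproduced in miniature outside Oracle. Below is a hedged sketch in Python using SQLite: the table, index and column names mirror the plans earlier in the thread, but the data and bind values are invented. SQLite's plan output differs from Oracle's, yet it illustrates the same point — with a two-column index, the optimizer seeks on col1 and col2 and applies col3 as a filter:

```python
import sqlite3

# Rebuild the experiment in miniature: a table queried on three columns
# but indexed on only two of them. Names mirror the plans in the thread;
# the data and the bind values are invented.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE table1 (col1 INT, col2 INT, col3 INT, col5 TEXT)")
cur.executemany(
    "INSERT INTO table1 VALUES (?, ?, ?, ?)",
    [(i % 100, i % 50, i % 8000, "v%d" % i) for i in range(20000)],
)

# Two-column composite index, as in the second plan above
cur.execute("CREATE INDEX ind_2col ON table1 (col1, col2)")

query = ("SELECT col5 FROM table1 "
         "WHERE col1 = ? AND col2 = ? AND col3 = ? LIMIT 1")  # LIMIT 1 ~ rownum=1
plan = cur.execute("EXPLAIN QUERY PLAN " + query, (6, 15, 267)).fetchall()
for row in plan:
    print(row[-1])
# SQLite reports a SEARCH using index ind_2col on (col1=? AND col2=?);
# col3, not covered by the index, is then checked against each fetched row,
# analogous to the filter("col3"=267) step in the Oracle plan.
```

Whether adding col3 to the index pays off depends on how many rows survive the (col1, col2) seek before the col3 filter, which is exactly the usage analysis suggested above.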
Regards
Anurag -
ALV with dropdown in column and other questions
hi,
I created an editable alv with configuration model via external context mapping.
Now I have a column which contains strings, and my aim is to show this whole column as a drop-down list. The possible values of this drop-down are fetched from a database table at runtime, and each cell in this column has its own selected value in the drop-down list.
Changes to the selected values should be written back to the database table in a later step.
1. What changes do I have to make? Should I remove my external mapping for the ALV and use the setData method, or what is the easiest way to get my drop-down list in this column instead of normal text (string)?
2. Should I choose DropDownByKey or DropDownByIndex?
I suppose I have to change the context when changing the selected value in the drop-down list, then read out the changed context element and update my database table?
3. How can I access rows (= tuples) in an ALV?
4. How can I prevent rows from being deleted, inserted or added with the normal ALV functions?

Hi Thorsten,
If the possible list of values in the dropdown is different for each row, you need a drop down by index. Else this can be done using a dropdown by key itself.
The only change that you need to do is create a cell editor of type dropdown for that particular column and configure your ALV model.
The sample code for changing the cell editor would be:
l_alv_model = l_ref_interfacecontroller->get_model( ).

DATA:
  lr_column_settings TYPE REF TO if_salv_wd_column_settings,
  lr_column          TYPE REF TO cl_salv_wd_column,
  lr_dropdown        TYPE REF TO cl_salv_wd_uie_dropdown_by_key.

lr_column_settings ?= l_alv_model.
lr_column = lr_column_settings->get_column( '<column name>' ).

CREATE OBJECT lr_dropdown
  EXPORTING
    selected_key_fieldname = '<fieldname>'.

lr_column->set_cell_editor( lr_dropdown ).
Here, you can substitute the field name with your field name that has to be displayed as a dropdown.
The dropdown list can be populated in the wddoinit method by attaching a value set to the attribute in the context node info. Everything else will be like you do in a normal table. You can so a get_static_attributes of your elements and persist in tables. Whenever you change the selected value, the context will be updated immediately. You can have a user defined function like 'Save' or something where you can just read the context and process the data.
If you do not want to display any of the normal functions, you can do the following:
data: lt_functions type SALV_WD_T_FUNCTION_STD_REF,
ls_function type SALV_WD_s_FUNCTION_STD_REF.
lt_functions = l_alv_model->if_salv_wd_function_settings~get_functions_std( ).
loop at lt_functions into ls_function.
ls_function-r_function->set_visible( CL_WD_UIELEMENT=>E_VISIBLE-NONE ).
endloop.
The above code will hide all the standard functions. If you want to hide specific ones, do a get_function(id) and set_visible.
Hope this helps.
Regards
Nithya