Partial indexing
I have what seems to be a slightly different problem with Spotlight (the main reason I spent good money on Tiger), in that it's only indexing about the first half of longish Word documents. I need it for keyword searching of content, but while it will find what I'm looking for in the first part of a document, after that it finds nothing. I've run checks on several documents, and the same thing is happening with most of them. I've re-indexed the hard drive twice by putting it into and out of Privacy, but it has made no difference. When it starts indexing, an estimate of 3 hours is given for the process, falling almost immediately to around 15 minutes, which is more or less how long it takes. It seems to get bored half way through a document (can't blame it really, I do too) and dash on to the next one. Any ideas?
Sarah
Spotlight has a limit on just how much content actually gets indexed. I don't remember just what the limit is; all I can say is that I have a plain text version of the entire King James Bible, and if I type the word Jesus into Spotlight it finds the file, so it has indexed the content all the way into the New Testament. I wonder if the formatting of Word docs is a problem? Anyway, you might try splitting your "longish" docs into two files and see if the second half, as a separate file, then gets its content indexed. By the way, reindexing the drive will probably set back how much content has been indexed; Spotlight seems to chug away at content indexing for some time after it is "finished" with indexing. Some people have guessed that getting all the content truly indexed on a biggish drive can take as long as a week.
Francine
Schwieder
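Francine's suggestion of splitting long documents into two files can be scripted for plain-text exports. A sketch (the file name is hypothetical; binary .doc files would need to be split in Word itself):

```python
from pathlib import Path

def split_in_half(path: str) -> None:
    """Write path + '.part1' and path + '.part2', cut near the middle."""
    text = Path(path).read_text(encoding="utf-8")
    mid = len(text) // 2
    # Cut at the nearest preceding newline so we don't split mid-paragraph
    cut = text.rfind("\n", 0, mid)
    cut = cut if cut != -1 else mid
    Path(path + ".part1").write_text(text[:cut], encoding="utf-8")
    Path(path + ".part2").write_text(text[cut:], encoding="utf-8")
```

If the content-limit theory is right, each half then stands a better chance of being indexed in full.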
Similar Messages
-
Hi Guys,
I know this isn't built into Oracle, but I want to clarify the matter.
Is there anything in Oracle 9i and 10g like partial indexes (as in PostgreSQL)? I'm trying to ensure that the ab_tran_info.value field is unique within some ab_tran_info.type_id values, while allowing non-uniqueness for other type_ids. Can this be done?
OPS$OSKAR@test10g>drop table ab_tran_info;
Table dropped.
OPS$OSKAR@test10g>
OPS$OSKAR@test10g>create table ab_tran_info (type_id number, value number);
Table created.
OPS$OSKAR@test10g>
OPS$OSKAR@test10g>create unique index ix_f on ab_tran_info (case when type_id in (1,3)
2 then value end);
Index created.
OPS$OSKAR@test10g>
OPS$OSKAR@test10g>insert into ab_tran_info values (1,1);
1 row created.
OPS$OSKAR@test10g>insert into ab_tran_info values (2,1);
1 row created.
OPS$OSKAR@test10g>insert into ab_tran_info values (3,1);
insert into ab_tran_info values (3,1)
ERROR at line 1:
ORA-00001: unique constraint (OPS$OSKAR.IX_F) violated
OPS$OSKAR@test10g>insert into ab_tran_info values (4,1);
1 row created. -
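The function-based unique index in the session above is the standard Oracle workaround; in engines that do have partial indexes (PostgreSQL, SQLite), the same constraint reads more directly. A sketch using Python's bundled sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE ab_tran_info (type_id INTEGER, value INTEGER)")
# True partial index: uniqueness enforced only where type_id is 1 or 3
cur.execute("""CREATE UNIQUE INDEX ix_f ON ab_tran_info (value)
               WHERE type_id = 1 OR type_id = 3""")

cur.execute("INSERT INTO ab_tran_info VALUES (1, 1)")
cur.execute("INSERT INTO ab_tran_info VALUES (2, 1)")      # outside the index: OK
try:
    cur.execute("INSERT INTO ab_tran_info VALUES (3, 1)")  # duplicate value within (1, 3)
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
cur.execute("INSERT INTO ab_tran_info VALUES (4, 1)")      # outside the index: OK
```

This matches the behavior of the CASE-based index in the session: (3, 1) is rejected, the other three rows go in.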
Hello all, I've recently developed a problem with Spotlight and its ability to find files that I've tagged. I've been using Quicksilver to try to create on-the-fly smart folders for a specific tag, &project1, for instance. However, my first attempts were futile until I re-indexed Spotlight. After doing this, I was able to find some files with the &project1 tag; however, if I created new tags for files, Spotlight would not find them or even acknowledge that they existed unless I re-indexed again. This renders Spotlight almost useless for making smart folders quickly, as I have to constantly re-index my HD to find anything by tags. So my question is, does anyone have any idea what's going wrong? And more importantly, does anyone have any idea as to how to fix it? Please respond.
Dan
In Disk Utility, you need to run the Repair Disk function (not Repair Disk Permissions).
In order to do so on your main hard disk, you need to start the computer from the Tiger install DVD.
1. Insert DVD
2. Shut down computer
3. Turn on computer while holding 'C' key until you get to the grey spinner screen.
4. Select the language
5. Go to the menu bar: Utilities > Disk Utility
6. Run the Repair Disk routine.
-If it reports any errors, run it a second time to get the all OK
-If it reports errors that it can't fix, you may need an alternative application.
7. Then quit the installer, set your startup disk to the hard disk, and restart the computer. -
Partial indexing of file names
There are a great many files (actually file names) that Spotlight cannot find. (Easy Find can find them.) If I open these files and then close them, Spotlight indexes them and can subsequently find them. Is there a way to re-index my HD?
I have been trying to pinpoint problems on my G5 (dual processor with 2 GB RAM) that began when Software Update installed OS X 10.4.3. The problem has had the following symptoms:
a. When I click on a file of any type in the Finder in order to select it, the Finder will BEACHBALL for several minutes but will ultimately return so that I can select the file and perform the desired actions. Examples include all movies, music files, NoteTaker files, and applications such as Final Cut Pro, DVD Studio Pro, and, often, iMovie HD. It has also happened on Word files.
I ran Activity Monitor all day yesterday and noticed that Finder (or sometimes the Application that is using a file import routine that I believe to be related to Finder, such as iPhoto or iMovie), will NOT RESPOND (appear in Red) for several minutes, mds will always be at about 99-100%, and, suddenly the Not Responding application will find its soul and start responding. Frustrating!
b. The same thing happens every time I try to import into iPhoto or iTunes.
c. I noted that Spotlight NEVER returns results.
d. I ultimately re-installed the OS X with the option to restore my network and user settings. Nothing changed.
Yesterday I reinstalled again with the option that wipes the drive clean. Nothing changed. In both cases I updated to 10.4.3 via Software Update.
e. I tried to force Spotlight index updating via the GUI method, but when I click the ADD button and select a file to ADD via the Privacy tab, nothing happens. The file is not added to the list. So I just ran the Terminal method described by Dr. Smoke, below.
I received the following message each time I ran this command: "Error, no index found for volume".
It appears to me that there is a common problem underlying these symptoms, in that Finder and Spotlight seem to thrash whenever they have to perform a search or import function. I am totally at a loss on this. I have a brand new PB 15 and a PB 17 both running 10.4.3 as well as four other iMac G4s, and none of them have this problem. HELP!
Hi, J.
Two approaches to reindexing your startup disk:
1. The GUI way: see "Spotlight: How to re-index files and folders."
2. The Terminal way:
2.1. In Terminal, type the following command exactly as written:
sudo mdutil -E /
then press Return.
2.2. Type your Admin password when prompted, then press Return.
2.3. You'll receive a confirmation message that the index will be rebuilt automatically. Indexing will begin shortly thereafter.
I prefer the Terminal approach over the GUI (Privacy) approach, as there are some anomalies associated with using Privacy. You can run into anomalies in Tiger if you combine Privacy with mdutil. For details, see my "Stop Spotlight Indexing" FAQ.
Good luck!
Dr. Smoke
Author: Troubleshooting Mac® OS X
Note: The information provided in the link(s) above is freely available. However, because I own The X Lab™, a commercial Web site to which some of these links point, the Apple Discussions Terms of Use require I include the following disclosure statement with this post: I may receive some form of compensation, financial or otherwise, from my recommendation or link.
G5 2 x 2 and PB 15 Mac OS X (10.4.3) -
Oracle evolution suggestion : NULL and Index
Hello,
As you know, in most cases NULL values are not indexed and therefore not searchable using an index.
So when you write WHERE MyField IS NULL, you run the risk of a full table scan.
However, most people don't know that, and so don't think about this possible issue or the possible solutions (a bitmap index, or including a NOT NULL column in the index).
SQL Server, MySQL and probably some other databases don't have the same behavior, as they index NULLs.
I know this caveat can be used to get partial indexing by nulling out uninteresting values, so the behavior can't simply be removed.
So I would suggest enhancing the CREATE INDEX command to allow indexing NULLs as well, with something like:
Create index MyIndex on MyTable(MyColumn including nulls)
While making this change, perhaps it would also be great to change the behavior documented below, which looks like an old inheritance too, by adding keywords like "allow null duplicate" and "constraint on null duplicate":
Ascending unique indexes allow multiple NULL values. However, in descending unique indexes, multiple NULL values are treated as duplicate values and therefore are not permitted.
Laurent
Hello,
Thanks for the links; they cover the main solutions for indexing NULL values. There's also the use of a bitmap index.
None of them is very intuitive for a non-expert.
But the purpose of my message was mainly to highlight this complexity for quite a basic need, as I think the default behavior should be to index NULLs, with the option not to index them.
As I said, this is the behavior in SQL Server and MySQL. That's why I suggest enhancing index behavior to allow indexing NULLs easily, and not via strange tricks like indexing a blank space or a NOT NULL column.
These solutions are, from my viewpoint, workarounds. Helpful workarounds, but still workarounds. The Oracle database team has the power to fix this root cause without breaking backward compatibility. That is the point of my message; I'm just hoping they can hear me...
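A toy model of the behavior Laurent describes (plain Python, not a real database; names are illustrative): an Oracle B-tree stores no entry when every key column is NULL, which is why the classic constant-column trick makes IS NULL answerable from an index:

```python
# Rows are (rowid, col); col may be None (SQL NULL)
rows = [(1, "a"), (2, None), (3, "b"), (4, None)]

def build_index(rows, key):
    # Oracle-style rule: an entry whose ENTIRE key is NULL is not stored
    return [(key(v), rid) for rid, v in rows
            if not all(part is None for part in key(v))]

ix_plain = build_index(rows, lambda v: (v,))     # index on (col)
ix_const = build_index(rows, lambda v: (v, 0))   # workaround: index on (col, 0)

# NULL rows are missing from the single-column index, so
# "WHERE col IS NULL" cannot be answered from it...
nulls_in_plain = [rid for k, rid in ix_plain if k[0] is None]
# ...but the constant column means the key is never all-NULL,
# so every row has an entry in the composite index.
nulls_in_const = [rid for k, rid in ix_const if k[0] is None]
```

This is only a model of the entry-storage rule, not of B-tree search itself.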
Laurent -
Hi ,
We have a catalog that defines 2 types of products (they have too many different properties), so we wanted to keep them on two different MDEX engines and serve the application requests from there. The DB catalog and the front-end ATG application are the same for both MDEX instances.
Is it possible to have 2 different output config XML files and index the data into 2 endeca apps using the same indexing component ProductCatalogSimpleIndexingAdmin?
Thanks
Dev
Hi, I also had this problem some months ago - I created a separate component, ProductCatalogSimpleIndexingAdminSecond. After that one of my colleagues gave me some advice:
Creating a separate component like ProductCatalogSimpleIndexingAdmin for the second IOC is a possible way to resolve your situation. But I'm afraid this way will require creating many duplicates of already existing components.
In my opinion the better way is the following:
Start from the AssemblerApplicationConfiguration and ApplicationConfiguration components. They contain the details for the connection between ATG and Endeca. Of course, you should configure different components for the different Endeca apps.
After that:
Find all components that use AssemblerApplicationConfiguration and ApplicationConfiguration. Customize these components to use one or the other *Configuration component depending on which index is running. (There are many ways to implement this; the simplest is a global custom component with a flag.)
Then customize the existing ProductCatalogSimpleIndexingAdmin to use one or the other IOC, setting the flag in the global custom component when indexing starts. You can add methods to your custom ProductCatalogSimpleIndexingAdmin like:
Execute baseline index for both IOC (one by one)
Execute baseline for IOC 1
Execute baseline for IOC 2.
Note: be careful with incremental (partial) indexing in this configuration. Resolving conflicts in incremental indexing should be done only after these changes are fully implemented.
Regards -
InDesign CS3 Crashed when making Index from Book
Hi,
I'm trying to make an index from a book; there are 25 documents and it's about 2300 pages in all - HUGE. Unfortunately I can't tell which of the documents is causing my crashes; the processing windows flash by too quickly to be able to read.
I've gone through all of my documents and have checked the "update preview" icon at the bottom of the Index palette. I've read forums saying that it may be as simple as having my workspace on "basic" and not customized; I'm doing that.
I'm at the point where I'm adding a document at a time to my book and generating the index again and again and again... to try to narrow down the "problem child" document.
I've been dealing with this for days now, if anyone has any suggestions, I'm all ears! :)
Thanks!
I've been having the same problem with my book (32 documents besides the index, 545 pages) ever since yesterday, when I went from two partially indexed chapters to five partially indexed chapters. I spent about 10 hours yesterday creating topics, references, and cross-references, but ever since ID crashed, only a small fraction of the topics I've *ever* created for this index are showing up in the index tool window.
I'll do the same as Alexa and, working on the assumption that this is being caused by a corrupted chapter/document, will systematically isolate all documents until I find the one causing the crash. I'll report back on what I learn, but I'd really like to hear from anyone in the same boat (or even better, anyone who *was* in the same boat but found a way to stop the crashes and maybe even recover all the index elements).
Thanks in advance,
Michael -
ORDER BY on large VARCHAR column
The database driver I am using does not allow me to set an index on a large VARCHAR field. Anyone have any tips for speeding up an ORDER BY on this column for a very large table? There must be some standard tricks out there but I'm having some trouble finding them. Currently, for something like 300,000 records, a simple query without an order by takes a few milliseconds. The same query ORDERed by this VARCHAR column takes a few minutes.
I was thinking of adding a new column, a LONG, called something like NAME_ORDER. Each time I insert a new record, I would search for the record that comes before the new one (decided using a COMPARE-like function) and then either make the new record's NAME_ORDER a value between those of the previous and next records or, if there is no room left, make it the previous record's value plus one and increment all the following records.
Wow, this sounds like a drag. Anyone have any better ideas? And no I can't shorten or truncate the VARCHAR columns.
TIA!
How much data are you selecting from this table at one time? (Not all of it, I hope...)
There are two situations I can think of causing your problem:
1) You are selecting 300,000 rows and doing an ORDER BY on that... even with an index that might well take a long time.
2) Generally the number of rows in the table won't matter; it's how many you return. But the other things that make a slow query are ORDER BY in conjunction with any columns that are functions (such as SUM, COUNT, AVG) or any GROUP BY clause.
For example this...
SELECT x, COUNT(*) FROM table GROUP BY x
might well perform better (by an exponential rate) than
SELECT x, COUNT(*) AS thecount FROM table GROUP BY x ORDER BY thecount;
IMO, generally speaking, ORDER BY is one of the worst performance things you can do with a database if the field you are sorting isn't indexed.
so here are my tips...
1) Take a look at your query... do you filter out some rows or are you selecting all 300,000? Does your query have functions or GROUP BY statements that are killing your speed?
2) If you are using functions or GROUP BY (and you have to use them), try using a temporary table and sorting afterwards... this may actually be faster.
3) Try to at least build a partial index on the field... it may well be good enough. Most databases will let you do this on VARCHAR fields... the idea is that the index is just on the first 50 chars or whatnot. -
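Tip 3 (a prefix index) can be emulated on engines without native prefix indexes by indexing an expression. A sketch with Python's bundled sqlite3 (whether the optimizer actually uses the index for the sort depends on the engine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (name TEXT)")
cur.executemany("INSERT INTO t VALUES (?)",
                [("banana",), ("apple",), ("cherry",)])
# "Partial" index on just the first 50 characters of the long column
cur.execute("CREATE INDEX ix_prefix ON t (substr(name, 1, 50))")
# Sort by the indexed expression, with the full column as a
# tie-breaker for rows that share the same 50-char prefix
rows = cur.execute(
    "SELECT name FROM t ORDER BY substr(name, 1, 50), name").fetchall()
```

The tie-breaker keeps the ordering correct even when two long values only differ past the indexed prefix.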
Simple query performance problem
Hey!
I'm using two simple XQUpdate queries in my wholedoc container.
a) insert nodes <node name="my_name"/> as last into collection('xml_content.dbxml')[dbxml:metadata('dbxml:name')='document.xml']/nodes[1]
b) delete node collection('xml_content.dbxml')[dbxml:metadata('dbxml:name')='document.xml']/nodes[1]/node[@name='my_name'][last()]
The queries are operating on the same document.
1) First a bunch of 'insert' queries has been executed (ca.50),
2) Then a bunch of delete queries (ca. 50).
The attribute name of element node varies.
After a couple of iterations of 1) and 2), each XQUpdate statement takes a lot of time to complete (ca. 5-10 secs, whereas before it took much less than a second).
The number of node elements in the nodes element never exceeded 50. And eventually it works very slowly even with 2 node elements.
Does anybody have an idea what goes wrong after a certain number of queries? What are the possible solutions here? How can I examine what is wrong?
I didn't find relevant information in DB XML docs. Maybe I should look at BDB docs?
Thanks in advance,
Vyacheslav
Here is a patch to fix the problem in 2.4.16. Note that the slowdown that this patch fixes applies only to whole-document containers.
Lauren Foutz
diff -ru dbxml-2.4.16-orig/dbxml/src/dbxml/Indexer.cpp dbxml-2.4.16/dbxml/src/dbxml/Indexer.cpp
--- dbxml-2.4.16-orig/dbxml/src/dbxml/Indexer.cpp 2008-10-21 17:27:22.000000000 -0400
+++ dbxml-2.4.16/dbxml/src/dbxml/Indexer.cpp 2009-04-27 14:06:40.000000000 -0400
@@ -477,7 +477,8 @@
if(updateStats_) {
// Get the size of the node
size_t nodeSize = 0;
- if(ninfo != 0) {
+ // Node size is kept only for node containers
+ if(ninfo != 0 && container_->isNodeContainer()) {
const NsFormat &fmt =
NsFormat::getFormat(NS_PROTOCOL_VERSION);
nodeSize = ninfo->getNodeDataSize();
@@ -487,18 +488,22 @@
0, /*count*/true);
- // Store the node stats for this node
+ /* Store the node stats for this node, only the descendants
+ * of the node being partially indexed are being removed/added
+ */
StructuralStats *cstats = &cis->stats[0];
- cstats->numberOfNodes_ = 1;
+ cstats->numberOfNodes_ = this->getStatsNumberOfNodes(ninfo);
cstats->sumSize_ = nodeSize;
// Increment the descendant stats in the parent
StructuralStats *pstats = 0;
if (pis) {
pstats = &pis->stats[0];
- pstats->sumChildSize_ += nodeSize;
- pstats->sumDescendantSize_ +=
- nodeSize + cstats->sumDescendantSize_;
+ if (container_->isNodeContainer()) {
+ pstats->sumChildSize_ += nodeSize;
+ pstats->sumDescendantSize_ +=
+ nodeSize + cstats->sumDescendantSize_;
+ }
pstats = &pis->stats[k.getID1()];
pstats->sumNumberOfChildren_ += 1;
diff -ru dbxml-2.4.16-orig/dbxml/src/dbxml/Indexer.hpp dbxml-2.4.16/dbxml/src/dbxml/Indexer.hpp
--- dbxml-2.4.16-orig/dbxml/src/dbxml/Indexer.hpp 2008-10-21 17:27:18.000000000 -0400
+++ dbxml-2.4.16/dbxml/src/dbxml/Indexer.hpp 2009-04-27 14:08:20.000000000 -0400
@@ -19,6 +19,7 @@
#include "OperationContext.hpp"
#include "KeyStash.hpp"
#include "StructuralStatsDatabase.hpp"
+#include "nodeStore/NsNode.hpp"
namespace DbXml
@@ -181,6 +182,8 @@
void checkUniqueConstraint(const Key &key);
void addIDForString(const unsigned char *strng);
+
+ virtual int64_t getStatsNumberOfNodes(const IndexNodeInfo *ninfo) const { return 1; }
protected:
// The operation context within which the index keys are added
diff -ru dbxml-2.4.16-orig/dbxml/src/dbxml/nodeStore/NsReindexer.cpp dbxml-2.4.16/dbxml/src/dbxml/nodeStore/NsReindexer.cpp
--- dbxml-2.4.16-orig/dbxml/src/dbxml/nodeStore/NsReindexer.cpp 2008-10-21 17:27:22.000000000 -0400
+++ dbxml-2.4.16/dbxml/src/dbxml/nodeStore/NsReindexer.cpp 2009-04-27 14:04:42.000000000 -0400
@@ -103,6 +103,7 @@
const DocID &did = document_.getID();
DbWrapper &db = *document_.getDocDb();
ElementIndexList nodes(*this);
+ partialIndexNode_ = node->getNid();
do {
bool hasValueIndex = false;
bool hasEdgePresenceIndex = false;
@@ -124,6 +125,7 @@
nodes.generate(*this);
+ partialIndexNode_ = 0;
return ancestorHasValueIndex;
@@ -203,6 +205,19 @@
+
+int64_t NsReindexer::getStatsNumberOfNodes(IndexNodeInfo *ninfo) const
+{
+ /* Get the number of this node being removed or added, only the descendants
+ * of the node being partially indexed are being removed/added
+ */
+ DBXML_ASSERT(!partialIndexNode_ || (ninfo != 0));
+ if (!partialIndexNode_ || (partialIndexNode_.compareNids(ninfo->getNodeID()) < 0)) {
+ return 1;
+ }
+ return 0;
+}
+
const char *NsReindexer::lookupUri(int uriIndex)
diff -ru dbxml-2.4.16-orig/dbxml/src/dbxml/nodeStore/NsReindexer.hpp dbxml-2.4.16/dbxml/src/dbxml/nodeStore/NsReindexer.hpp
--- dbxml-2.4.16-orig/dbxml/src/dbxml/nodeStore/NsReindexer.hpp 2008-10-21 17:27:18.000000000 -0400
+++ dbxml-2.4.16/dbxml/src/dbxml/nodeStore/NsReindexer.hpp 2009-04-27 14:09:04.000000000 -0400
@@ -45,6 +45,7 @@
const char *lookupUri(int uriIndex);
void indexAttribute(const char *aname, int auri,
NsNodeRef &parent, int index);
+ virtual int64_t getStatsNumberOfNodes(IndexNodeInfo *ninfo) const;
private:
IndexSpecification is_;
KeyStash stash_;
@@ -54,6 +55,9 @@
// this is redundant wrt Indexer, but dict_ in Indexer triggers
// behavior that this class does not want
DictionaryDatabase *dictionary_;
+
+ // The node being indexed in partial indexing
+ NsNid partialIndexNode_;
}
Edited by: LaurenFoutz on May 1, 2009 5:47 AM -
Partial insert with secondary index
I'd like to use partial inserts (DB_DBT_PARTIAL).
However, I also need secondary indexes. Now when I insert partial data, the secondary index callback is called immediately, but it does not yet have enough data to calculate the secondary key. I tried to supply some app_data to the DBT, but it does not reach the secondary index callback.
Is there any way I can use partial inserts together with secondary indexes?
When writing a partial record to the primary database, here's what we do:
1. Look up the existing record via get()
2. Using the existing record and the partial DBT, construct the new record
3. Pass that newly constructed record to the secondary key generation function
You don't need to do anything special, your callback will receive the full record, even though you passed us a partial DBT. If that's not the case, there's a bug in BDB and we'd need to see your code.
app_data is a private field, you should not try to use it or expect it to work predictably. In this situation, we're passing a brand new DBT to the callback function, not the DBT you gave us. That's why app_data is empty.
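The three steps above can be modelled with plain Python dicts (illustrative names, not the actual BDB API) to show why the callback always sees the full record:

```python
# Toy primary database: key -> full record bytes
primary = {b"k1": bytearray(b"AAAABBBB")}

def secondary_key(full_record: bytes) -> bytes:
    # Hypothetical key extractor: last 4 bytes of the record
    return bytes(full_record[-4:])

def partial_put(key, data, offset):
    existing = primary.get(key, bytearray())   # 1. look up the existing record
    new = bytearray(existing)                  # 2. splice the partial DBT into it
    new[offset:offset + len(data)] = data
    primary[key] = new
    return secondary_key(bytes(new))           # 3. callback sees the FULL record

sk = partial_put(b"k1", b"CCCC", 4)  # overwrite bytes 4..7 only
```

Even though only 4 bytes were supplied, the secondary-key function is handed the reconstructed 8-byte record, exactly as Bogdan describes.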
Thanks,
Bogdan Coman -
Help with creating oracle text index on 2 columns with partial html data
Hi,
I need to create an oracle text index on 2 columns.
TITLE - varchar(255) = contains plain text data
DESCRIPTION - CLOB = contains partial HTML data
This is what I created.
begin
ctx_ddl.create_preference ('Title_Description_Pref', 'MULTI_COLUMN_DATASTORE');
ctx_ddl.set_attribute('Title_Description_Pref', 'columns', 'TITLE, DESCRIPTION');
end;
begin
ctx_ddl.create_preference ('bid_lexer', 'BASIC_LEXER');
ctx_ddl.set_attribute('bid_lexer', 'index_stems', 'ENGLISH');
ctx_ddl.create_section_group('htmgroup', 'HTML_SECTION_GROUP');
end;
create index Bid_Title_Index on Bid(title) indextype is ctxsys.context parameters ('LEXER bid_lexer sync (every "sysdate+(1/24)")');
create index Bid_Title_Desc_Index on Bid(description) indextype is ctxsys.context parameters ('LEXER bid_lexer DATASTORE Title_Description_Pref sync (every "sysdate+(1/24)") filter ctxsys.null_filter section group htmgroup');
The problem is that when I do a CONTAINS(description, '$(auction)')>0, I get results where the descriptions have the "auction" word (which is correct). But the results also return rows where the search word appears only inside an IMG tag, e.g. <img src="http://auction.de/120483" alt="Auction Logo"/>.
What I would like is to exclude rows where the search word is inside HTML tag attributes; the expected results are rows having <a>Auction</a> or <p>For Auction</p> etc. Basically, strip the HTML tags and leave the text contents.
I'd appreciate some input.
Thanks,
Amiel -
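The HTML_SECTION_GROUP is designed to index only text content and ignore tag attributes. As a quick sanity check of the desired behavior outside the database, here is a sketch using Python's html.parser (class and function names are illustrative):

```python
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    """Collect only text nodes; attribute values such as src= or alt= never
    reach handle_data, so they are excluded automatically."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def visible_text(html: str) -> str:
    p = TextOnly()
    p.feed(html)
    return " ".join(" ".join(p.chunks).split())

doc = '<img src="http://auction.de/120483" alt="Auction Logo"/><p>For Auction</p>'
text = visible_text(doc)  # attribute values are gone, element text remains
```

Rows whose only "auction" hit is inside an attribute produce no match against this stripped text, which is the behavior Amiel wants from the section group.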
Partial fields of Secondary index being used by the DB Optimizer
Hello,
I have written the following query, which uses the standard SAP index GLPCA~1. However, when I run the SQL trace, although the index is selected by the DB optimizer, the results say that only 3 matching columns were used.
Index GLPCA~1:
KOKRS
RYEAR
RPRCTR
RVERS
RACCT
SELECT rldnr
rrcty
rvers
ryear
rtcur
rpmax
rbukrs
rprctr
rfarea
kokrs
racct
hslvt
hsl01
hsl02
hsl03
hsl04
hsl05
hsl06
hsl07
hsl08
hsl09
hsl10
hsl11
hsl12
kslvt
ksl01
ksl02
ksl03
ksl04
ksl05
ksl06
ksl07
ksl08
ksl09
ksl10
ksl11
ksl12
FROM glpct
INTO TABLE i_glpct
WHERE kokrs = 'BFS'
AND ryear = p_gjahr
AND rprctr IN r_prctr
AND rvers = '000'
AND racct IN r_acct.
Now I am not sure which 3 of the above fields in the WHERE condition are being used, but probably KOKRS and RVERS are not being used by the optimizer.
Any pointers on how to make the optimizer utilize all 5 fields would be greatly appreciated.
Thanks,
Minhaj. -
Auto index only shows partial data
Hi everyone,
Please see the attached file. I've added notes where the data goes missing. I know it has something to do with auto-indexing, and I spent quite a bit of time reading about it and trying different things, but I really need some help now, please.
Thank you!
Solved!
Go to Solution.
Attachments:
For forum.vi 47 KB
For forum.xlsx 12 KB
Think about what you're trying to do.
You have three arrays you're indexing. Let's say they're lengths 1, 2, and 3.
In the first iteration, we'd index 0 and get back a value from all three.
In the second iteration, we'd index 1 and get back two values and have a bounds conflict.
In the third iteration, we'd index 2 and get back a single value while having two bounds conflicts.
Anything you process in the second and third iterations is garbage. You have bad inputs, so the output isn't useful to you. In some languages, this will throw a runtime error for indexing out of bounds. Some will just let you process the garbage. In no case do you WANT to run this.
If you want to process all the elements of the array of size 3, you need to fill in values for the first two arrays to make them all have a length of 3. Once you do that, the loop will run as you desire. If you have auto-indexed tunnels with fewer elements than the constant you wire, they will override the terminal, as they should. -
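A For Loop with several auto-indexed input tunnels runs only as many iterations as the smallest array provides; the same truncation happens with Python's zip, which makes the "missing" data easy to see:

```python
# Three arrays of different lengths, like three auto-indexed tunnels
a = [1]
b = [10, 20]
c = [100, 200, 300]

# zip stops at the shortest input, just as the loop count is set by
# the smallest auto-indexed array: c[1] and c[2] are never processed
processed = [(x, y, z) for x, y, z in zip(a, b, c)]

# Padding the shorter arrays to equal length processes everything
pad = lambda arr, n: arr + [0] * (n - len(arr))
full = list(zip(pad(a, 3), pad(b, 3), c))
```

The padding values here are zeros purely for illustration; in the VI they would be whatever default makes sense for the data.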
Stage3D: Render vertex/index buffer partially issue
I've tried to call Context3D:drawTriangles for part of vertex/index buffers and nothing is rendered.
This can be reproduced with HelloTriangle example from here: http://www.adobe.com/devnet/flashplayer/articles/hello-triangle.html
It works fine if you allocate buffers for 3 indices and 3 vertices. But if I change
context3D.drawTriangles(indexbuffer) to context3D.drawTriangles(indexbuffer, 0, 1)
and allocate one more item in either the vertex or index buffer, nothing is rendered.
Is it a bug or am I missing something?
My changes in code:
context3D.drawTriangles(indexbuffer, 0, 1);
and
vertexbuffer = context3D.createVertexBuffer(3 + 1, 6);
or
indexbuffer = context3D.createIndexBuffer(3 + 1);
Below is complete code from sample with my changes:
package {
import com.adobe.utils.AGALMiniAssembler;
import flash.display.Sprite;
import flash.display3D.Context3D;
import flash.display3D.Context3DProgramType;
import flash.display3D.Context3DVertexBufferFormat;
import flash.display3D.IndexBuffer3D;
import flash.display3D.Program3D;
import flash.display3D.VertexBuffer3D;
import flash.events.Event;
import flash.geom.Matrix3D;
import flash.geom.Rectangle;
import flash.geom.Vector3D;
import flash.utils.getTimer;
[SWF(width="800", height="600", frameRate="60", backgroundColor="#FFFFFF")]
public class VertexBufferTest extends Sprite
protected var context3D:Context3D;
protected var program:Program3D;
protected var vertexbuffer:VertexBuffer3D;
I've tried to call Context3D.drawTriangles for part of the vertex/index buffers and nothing is rendered.
This can be reproduced with the HelloTriangle example from here: http://www.adobe.com/devnet/flashplayer/articles/hello-triangle.html
It works fine if you allocate buffers for 3 indices and 3 vertices. But if I change context3D.drawTriangles(indexbuffer) to context3D.drawTriangles(indexbuffer, 0, 1) and allocate one more item in either the vertex or index buffer, nothing is rendered.
Is it a bug or am I missing something?
My changes in code:
context3D.drawTriangles(indexbuffer, 0, 1);
and
vertexbuffer = context3D.createVertexBuffer(3 + 1, 6);
or
indexbuffer = context3D.createIndexBuffer(3 + 1);
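For reference, the two extra arguments to drawTriangles select a sub-range of the index buffer: firstIndex is the offset of the first index to use, and numTriangles is how many triangles (3 indices each) to draw, with -1 meaning "all". A minimal language-neutral sketch of that selection, written in Python (the function name and shape are mine for illustration, not part of the Stage3D API):

```python
def select_triangles(indices, first_index=0, num_triangles=-1):
    """Model which triangles drawTriangles(indexBuffer, firstIndex,
    numTriangles) would draw from a flat list of vertex indices."""
    if num_triangles == -1:  # -1 means "draw all remaining triangles"
        num_triangles = (len(indices) - first_index) // 3
    return [tuple(indices[first_index + 3 * t : first_index + 3 * t + 3])
            for t in range(num_triangles)]

# One quad stored as two triangles:
quad = [0, 1, 2, 0, 2, 3]
print(select_triangles(quad))        # -> [(0, 1, 2), (0, 2, 3)]
print(select_triangles(quad, 0, 1))  # -> [(0, 1, 2)]  (first triangle only)
print(select_triangles(quad, 3, 1))  # -> [(0, 2, 3)]  (second triangle only)
```

So drawTriangles(indexbuffer, 0, 1) should be a strict subset of the default call; the selection arguments themselves are not the suspect here, which is what makes the blank render with an oversized buffer surprising.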
Below is complete code from sample with my changes:
package {
    import com.adobe.utils.AGALMiniAssembler;
    import flash.display.Sprite;
    import flash.display3D.Context3D;
    import flash.display3D.Context3DProgramType;
    import flash.display3D.Context3DVertexBufferFormat;
    import flash.display3D.IndexBuffer3D;
    import flash.display3D.Program3D;
    import flash.display3D.VertexBuffer3D;
    import flash.events.Event;
    import flash.geom.Matrix3D;
    import flash.geom.Rectangle;
    import flash.geom.Vector3D;
    import flash.utils.getTimer;

    [SWF(width="800", height="600", frameRate="60", backgroundColor="#FFFFFF")]
    public class VertexBufferTest extends Sprite
    {
        protected var context3D:Context3D;
        protected var program:Program3D;
        protected var vertexbuffer:VertexBuffer3D;
        protected var indexbuffer:IndexBuffer3D;

        public function VertexBufferTest()
        {
            stage.stage3Ds[0].addEventListener( Event.CONTEXT3D_CREATE, initMolehill );
            stage.stage3Ds[0].requestContext3D();
            addEventListener(Event.ENTER_FRAME, onRender);
        }

        protected function initMolehill(e:Event):void
        {
            context3D = stage.stage3Ds[0].context3D;
            context3D.configureBackBuffer(800, 600, 1, false);

            var vertices:Vector.<Number> = Vector.<Number>([
                -0.3, -0.3, 0, 1, 0, 0, // x, y, z, r, g, b
                -0.3,  0.3, 0, 0, 1, 0,
                 0.3,  0.3, 0, 0, 0, 1]);

            // Create VertexBuffer3D. 3 vertices, of 6 Numbers each
            vertexbuffer = context3D.createVertexBuffer(3, 6);
            // Upload VertexBuffer3D to GPU. Offset 0, 3 vertices
            vertexbuffer.uploadFromVector(vertices, 0, 3);

            var indices:Vector.<uint> = Vector.<uint>([0, 1, 2]);
            // Create IndexBuffer3D. 3 + 1 indices (the sample allocated 3; the extra slot is the change under test)
            indexbuffer = context3D.createIndexBuffer(3 + 1);
            // Upload IndexBuffer3D to GPU. Offset 0, count 3
            indexbuffer.uploadFromVector(indices, 0, 3);

            var vertexShaderAssembler:AGALMiniAssembler = new AGALMiniAssembler();
            vertexShaderAssembler.assemble( Context3DProgramType.VERTEX,
                "m44 op, va0, vc0\n" + // pos to clipspace
                "mov v0, va1"          // copy color
            );

            var fragmentShaderAssembler:AGALMiniAssembler = new AGALMiniAssembler();
            fragmentShaderAssembler.assemble( Context3DProgramType.FRAGMENT,
                "mov oc, v0"
            );

            program = context3D.createProgram();
            program.upload( vertexShaderAssembler.agalcode, fragmentShaderAssembler.agalcode );
        }

        protected function onRender(e:Event):void
        {
            if ( !context3D )
                return;

            context3D.clear(1, 1, 1, 1);
            // vertex position to attribute register 0
            context3D.setVertexBufferAt(0, vertexbuffer, 0, Context3DVertexBufferFormat.FLOAT_3);
            // color to attribute register 1
            context3D.setVertexBufferAt(1, vertexbuffer, 3, Context3DVertexBufferFormat.FLOAT_3);
            // assign shader program
            context3D.setProgram(program);

            var m:Matrix3D = new Matrix3D();
            m.appendRotation(getTimer() / 40, Vector3D.Z_AXIS);
            context3D.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, m, true);

            context3D.drawTriangles(indexbuffer, 0, 1);
            context3D.present();
        }
    }
}
-
I have a main report plus 5 subreports, all in a single file named _rptBorang.rpt.
This _rptBorang.rpt consists of:
1. Main Report
2. _spupimSPMBorang.rpt
3. _spupimSTPMBorang.rpt
4. _spupimSijilDiploma.rpt
5. _spupimKoQ.rpt
6. _spupimPilihanProg.rpt
When I preview the report, the Enter Values dialog box asks for 7 parameters:
1. idx
2. tbl_MST_Pemohon_idx
3. tbl_MST_Pemohon_idx(_spupimSPMBorang.rpt)
4. tbl_MST_Pemohon_idx(_spupimSTPMBorang.rpt)
5. tbl_MST_Pemohon_idx(_spupimSijilDiploma.rpt)
6. tbl_MST_Pemohon_idx(_spupimKoQ.rpt)
7. tbl_MST_Pemohon_idx(_spupimPilihanProg.rpt)
My ASP.NET code is as follows:
<%@ Page Language="VB" AutoEventWireup="false" CodeFile="_cetakBorang.aspx.vb" Inherits="_cetakBorang" title="SPUPIM" %>
<%@ Register assembly="CrystalDecisions.Web, Version=13.0.2000.0, Culture=neutral, PublicKeyToken=692fbea5521e1304" namespace="CrystalDecisions.Web" tagprefix="CR" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head id="Head1" runat="server">
<title>Untitled Page</title>
</head>
<body>
<form id="form1n" runat="server">
<div align="center">
<asp:Label ID="lblMsg" runat="server" ForeColor="Red"></asp:Label>
<CR:CrystalReportViewer ID="CrystalReportViewer1" runat="server"
AutoDataBind="true" />
</div>
</form>
</body>
</html>
Imports System.configuration
Imports System.Data.SqlClient
Imports System.Web.Security
Imports CrystalDecisions.Shared
Imports CrystalDecisions.CrystalReports.Engine
Partial Class _cetakBorang
Inherits System.Web.UI.Page
Private Const PARAMETER_FIELD_NAME1 As String = "idx"
Private Const PARAMETER_FIELD_NAME2 As String = "tbl_MST_Pemohon_idx"
Private Const PARAMETER_FIELD_NAME3 As String = "tbl_MST_Pemohon_idx(_spupimSPMBorang.rpt)"
Private Const PARAMETER_FIELD_NAME4 As String = "tbl_MST_Pemohon_idx(_spupimSTPMBorang.rpt)"
Private Const PARAMETER_FIELD_NAME5 As String = "tbl_MST_Pemohon_idx(_spupimSijilDiploma.rpt)"
Private Const PARAMETER_FIELD_NAME6 As String = "tbl_MST_Pemohon_idx(_spupimKoQ.rpt)"
Private Const PARAMETER_FIELD_NAME7 As String = "tbl_MST_Pemohon_idx(_spupimPilihanProg.rpt)"
Dim myReport As New ReportDocument
'rpt connection
Public rptSvrNme As String = ConfigurationManager.AppSettings("rptSvrNme").ToString()
Public rptUsr As String = ConfigurationManager.AppSettings("rptUsr").ToString()
Public rptPwd As String = ConfigurationManager.AppSettings("rptPwd").ToString()
Public rptDB As String = ConfigurationManager.AppSettings("rptDB").ToString()
Private Sub SetCurrentValuesForParameterField(ByVal reportDocument As ReportDocument, ByVal arrayList As ArrayList, ByVal paramFieldName As String)
Dim currentParameterValues As New ParameterValues()
For Each submittedValue As Object In arrayList
Dim parameterDiscreteValue As New ParameterDiscreteValue()
parameterDiscreteValue.Value = submittedValue.ToString()
currentParameterValues.Add(parameterDiscreteValue)
Next
Dim parameterFieldDefinitions As ParameterFieldDefinitions = reportDocument.DataDefinition.ParameterFields
Dim parameterFieldDefinition As ParameterFieldDefinition = parameterFieldDefinitions(paramFieldName)
parameterFieldDefinition.ApplyCurrentValues(currentParameterValues)
End Sub
Private Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles MyBase.Load
If Not IsPostBack Then
publishReport(Convert.ToInt32(Session("applicantIdx")), Convert.ToInt32(Session("applicantIdx")))
End If
End Sub
Private Sub publishReport(ByVal idx As Integer, ByVal tbl_MST_Pemohon_idx As Integer)
Try
Dim reportPath As String = String.Empty
reportPath = Server.MapPath("_rptBorang.rpt")
myReport.Load(reportPath)
myReport.SetDatabaseLogon(rptUsr, rptPwd, rptSvrNme, rptDB)
Dim arrayList1 As New ArrayList()
arrayList1.Add(idx)
SetCurrentValuesForParameterField(myReport, arrayList1, PARAMETER_FIELD_NAME1)
Dim arrayList2 As New ArrayList()
arrayList2.Add(tbl_MST_Pemohon_idx)
SetCurrentValuesForParameterField(myReport, arrayList2, PARAMETER_FIELD_NAME2)
Dim arrayList3 As New ArrayList()
arrayList3.Add(tbl_MST_Pemohon_idx)
SetCurrentValuesForParameterField(myReport, arrayList3, PARAMETER_FIELD_NAME3)
Dim arrayList4 As New ArrayList()
arrayList4.Add(tbl_MST_Pemohon_idx)
SetCurrentValuesForParameterField(myReport, arrayList4, PARAMETER_FIELD_NAME4)
Dim arrayList5 As New ArrayList()
arrayList5.Add(tbl_MST_Pemohon_idx)
SetCurrentValuesForParameterField(myReport, arrayList5, PARAMETER_FIELD_NAME5)
Dim arrayList6 As New ArrayList()
arrayList6.Add(tbl_MST_Pemohon_idx)
SetCurrentValuesForParameterField(myReport, arrayList6, PARAMETER_FIELD_NAME6)
Dim arrayList7 As New ArrayList()
arrayList7.Add(tbl_MST_Pemohon_idx)
SetCurrentValuesForParameterField(myReport, arrayList7, PARAMETER_FIELD_NAME7)
Dim parameterFields As ParameterFields = CrystalReportViewer1.ParameterFieldInfo
CrystalReportViewer1.ReportSource = myReport
Catch ex As Exception
lblMsg.Text = ex.Message
End Try
End Sub
Protected Sub Page_Unload(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Unload
myReport.Close()
End Sub
End Class
The result: my ASP.NET page returns the error ---> Invalid index. (Exception from HRESULT: 0x8002000B (DISP_E_BADINDEX))
I'm stuck and really need help.
Edited by: WKM1925 on Feb 22, 2012 11:49 AM
First off, it would really be nice to have the version of CR you are using. Then:
1) does this report work the CR designer?
2) What CR SDK are you using?
3) If .NET, what version?
4) Web or Win app?
5) What OS?
Then, please re-read your post and see if it actually makes any sense. To me, it's just gibberish...
Ludek
Follow us on Twitter http://twitter.com/SAPCRNetSup
Got Enhancement ideas? Try the [SAP Idea Place|https://ideas.sap.com/community/products_and_solutions/crystalreports]