Algorithm for Unix crypt function in PL/SQL
Does anyone have a similar sample in PL/SQL that does what the crypt
function does in Unix (C)?
> man crypt
Reformatting page. Wait... done
User Commands crypt(1)
NAME
crypt - encode or decode a file
SYNOPSIS
crypt [ password ]
DESCRIPTION
crypt encrypts and decrypts the contents of a file. crypt
reads from the standard input and writes on the standard
output. The password is a key that selects a particular
transformation. If no password is given, crypt demands a
key from the terminal and turns off printing while the key
is being typed in. crypt encrypts and decrypts with the
same key:
Thanks,
Frantz
The easiest way to accomplish this is to write an external procedure
wrapper in PL/SQL that calls the C crypt function directly.
I've been working on code that does this already, including the
generation of a random seed to generate the crypted password.
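A minimal sketch of such a wrapper, assuming a shared library (here /usr/lib/libcrypt.so) exposing the standard crypt(3) entry point; all object names below are illustrative, not from Tom's actual code:

```sql
-- Sketch: expose Unix crypt(3) to PL/SQL as an external procedure.
-- Assumes libcrypt.so is readable by the database server process.
CREATE OR REPLACE LIBRARY crypt_lib AS '/usr/lib/libcrypt.so';
/

CREATE OR REPLACE FUNCTION unix_crypt (
    p_key  IN VARCHAR2,   -- cleartext password
    p_salt IN VARCHAR2    -- two-character salt, as in crypt(3)
) RETURN VARCHAR2
AS LANGUAGE C
   LIBRARY crypt_lib
   NAME "crypt"
   PARAMETERS (p_key STRING, p_salt STRING, RETURN STRING);
/
```

This requires the external procedure agent (extproc) to be configured in the listener; the exact setup varies by Oracle version.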
Tom Tyson
F.Sunjic (guest) wrote:
: Does anyone have a similar sample in PL/SQL that does what the crypt
: function does in Unix (C)?
Similar Messages
-
Is the SHA-1 example shown on this web site better than UNIX crypt()?
Hi,
For a project I am working on I have to store passwords in a database. I want to store these passwords encrypted, and my first thought was to write/use an emulation of the UNIX crypt() function, e.g. UnixCrypt. However, I am concerned that UNIX passwords are easy to brute-force unless you have a very strict password policy....
However, I saw an example on this website in response to a similar question, where it was suggested to use SHA-1 to generate a one-way hash and Base64 to encode the result before storing it in the database. If I force people to choose passwords longer than the traditional 8 chars allowed by UNIX and use SHA-1 to generate the hash, will it be harder to discover the passwords using brute force than it is for standard UNIX passwords?
regards,
Jeff.
Only if the passwords chosen are secure. If someone chooses the word 'password', it won't matter which hashing algorithm is used (and Base64 is an encoding scheme, which doesn't add any security). Regardless of how passwords are encrypted, they should always be protected as if they were plaintext. A very strict password policy, along with regular attempts to crack your own password databases, would also be wise.
More simply, SHA-1 is considered to be a far more secure hashing algorithm than crypt(), and because Java has built in support for SHA-1 via the java.security.MessageDigest class you'll also save yourself some work.
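For completeness on the database side (this is not Derek's Java approach, just the Oracle equivalent), the same kind of one-way hash can be produced with the DBMS_CRYPTO package. A minimal sketch, assuming Oracle 10g or later and EXECUTE granted on DBMS_CRYPTO; the function name is illustrative:

```sql
-- Sketch: SHA-1 hash of a password, hex-encoded, inside the database.
-- Requires: GRANT EXECUTE ON DBMS_CRYPTO TO <schema>;
CREATE OR REPLACE FUNCTION hash_password (p_password IN VARCHAR2)
  RETURN VARCHAR2
IS
BEGIN
  RETURN RAWTOHEX(
           DBMS_CRYPTO.HASH(
             UTL_I18N.STRING_TO_RAW(p_password, 'AL32UTF8'),
             DBMS_CRYPTO.HASH_SH1));
END;
/
```

The same caveat applies as with the Java version: hashing unsalted passwords is still vulnerable to dictionary attacks on weak passwords.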
Good luck,
-Derek -
How to write a procedure, function, or any SQL statement for this requirement
Hi,
I have a table named letter; it contains 2 columns, letter_id and letter_content.
select * from letter;
letter_id letter_content
103 Dear MFR
103 This is in regards to the attached DM List
103 Please Credit us after reviewing it.
103 Thanks
103 Regards
103 xxxx
108 Dear customer
108 This is to inform that ur DM List is as follows
108 Credit us according to the Dm after reviewing it.
108 Thanks
108 Regards
108 xxxx
Now my requirement is: I need to send a parameter (letter_id) to a procedure or function in PL/SQL in Oracle, and the output should be as follows:
If we pass parameter letter_id = 103, it should display:
Dear MFR
This is in regards to the attached DM List. Please Credit us after reviewing it.
Thanks
Regards,
XXXXX.
If we pass parameter letter_id = 108, it should display:
Dear customer,
This is to inform that ur DM List is as follows. Credit us according to the Dm after reviewing it.
Thanks
Regards,
XXXXX.
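The per-letter concatenation described above can be sketched in plain SQL. This is a minimal sketch, assuming Oracle 11gR2 or later (for LISTAGG) and an ordering column such as content_seq; without an ordering column the line order within a letter is undefined:

```sql
-- Sketch: collapse all lines of one letter into a single string,
-- in line order. :p_letter_id is the parameter (103 or 108).
SELECT LISTAGG(letter_content, ' ')
         WITHIN GROUP (ORDER BY content_seq) AS letter_text
FROM   letter
WHERE  letter_id = :p_letter_id
GROUP BY letter_id;
```

On versions before 11gR2, a hierarchical SYS_CONNECT_BY_PATH query or a PL/SQL loop with DBMS_OUTPUT would be needed instead.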
I really appreciate your help.
Thank you so much for your suggestions.
When I am using line_seq it gives an error:
ORA-00904: "LINE_SEQ": invalid identifier
So in my table I created a sequence column named content_seq.
select * from letter;
content_seq letter_id letter_content
1 103 Dear MFR
2 103 This is in regards to the attached DM List
3 103 Please Credit us after reviewing it.
4 103 Thanks
5 103 Regards
6 103 xxxx
7 108 Dear customer
8 108 This is to inform that ur DM List is as follows
9 108 Credit us according to the Dm after reviewing it.
10 108 Thanks
11 108 Regards
12 108 xxxx
Then I used your code as follows:
select line
from ( select content_seq,
              letter_content ||
              case content_seq
                when 2 then ' ' || lead(letter_content)
                                   over (partition by letter_id
                                         order by content_seq)
              end as line
       from   letter
       where  letter_id = 103 )
where content_seq <> 3;
LETTER_CONTENT
Dear MFR
this is in regards to the attached DM List Please credit us after reviewing it
thanks
Regards
EXP
But when I take letter_id = 108 the format comes out different; it was unable to combine the 2 lines.
Message was edited by:
user579585
Message was edited by:
user579585 -
How to prepare for Converting UNIX shell scripts to PL/SQL
Hi All
I was told that I may have to convert a lot of Unix shell scripts to PL/SQL. What are the concepts I need to know to do it efficiently,
and what options does PL/SQL have to best do that?
I know the question is a little unclear, but I don't have much input about it myself, sorry for that. It's just a question of how
to prepare myself to do it the best way. What are the concepts I have to be familiar with?
Many Thanks
MJ
Just how much work is involved is hard to say. Many years ago I also wrote (more than once) a complete ETL system using a combination of shell scripts, SQL*Plus and PL/SQL.
If the PL/SQL code is fairly clean, uses bind variables and not substitution variables, then it should be relatively easy to convert that PL/SQL code in the script to a formal stored procedure in the database.
There are, however, bits and pieces that will be difficult to move into the PL/SQL layer because they require new software - for example, FTP'ing a file from the production server to the ETL server. This can be done using external o/s calls from within PL/SQL. Or you can install an FTP API library in PL/SQL and FTP the file directly into a CLOB, then parse and process the CLOB.
Think of Oracle as an o/s in its own right. In Oracle we have a mail client, a web browser, IPC methods like pipes and messages queues, cron, file systems, web servers and services, etc. And PL/SQL is the "shell scripting" (times a thousand) language of this Oracle o/s .
In some cases you will find it fairly easy to map a Unix o/s feature or command to one in Oracle. For example, a Unix wget to fetch a HTML CSV file can easily be replaced in Oracle using a UTL_HTTP call.
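The wget-to-UTL_HTTP mapping mentioned above might look like the following minimal sketch; the URL is illustrative, and on 11g and later a network ACL allowing outbound HTTP from the schema is also required:

```sql
-- Sketch: fetch a CSV file over HTTP, roughly what "wget" did in the shell.
DECLARE
  l_req  UTL_HTTP.REQ;
  l_resp UTL_HTTP.RESP;
  l_line VARCHAR2(32767);
BEGIN
  l_req  := UTL_HTTP.BEGIN_REQUEST('http://example.com/data.csv');
  l_resp := UTL_HTTP.GET_RESPONSE(l_req);
  BEGIN
    LOOP
      UTL_HTTP.READ_LINE(l_resp, l_line, TRUE);  -- one CSV record per line
      -- parse l_line and insert into a staging table here
    END LOOP;
  EXCEPTION
    WHEN UTL_HTTP.END_OF_BODY THEN
      UTL_HTTP.END_RESPONSE(l_resp);
  END;
END;
/
```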
On the other hand, techniques used in Unix like creating a pipe to process data, grep for certain stuff and awk certain tokens for sed to process further... in Oracle this will look and work a lot different and use SQL. -
Hallo,
does anyone know whether a JDBC driver for MS SQL is available for Unix servers?
Thanks, Yves
Message was edited by: Kraus Yves
Hi Kraus,
Yes, a JDBC driver for MS SQL that runs on a Unix server is available:
http://www.microsoft.com/downloads/details.aspx?FamilyID=4F8F2F01-1ED7-4C4D-8F7B-3D47969E66AE&displaylang=en
cheers,
Naveen
Message was edited by: Naveen Pandrangi -
SQL Server's FOR XML EXPLICIT functionality in Oracle
What would be the best way to implement SQL Server's FOR XML EXPLICIT functionality in Oracle? Can someone please give an overview?
Probably you can try General XML forum General XML
Gints Plivna
http://www.gplivna.eu -
Routing Functionality in MS SQL SERVER 2012
I am using PostgreSQL for one of my projects, which uses the pgRouting functionality for routing,
but I would like to port the database to MS SQL Server 2012. The data got ported successfully, but I am still facing a problem: routing.
Is there any way to use routing in MS SQL Server 2012?
@Tracy Cai
I went through the book Pro Spatial and found the A* algorithm for routing, which is given below, but it is producing many errors. I am really new to SQL Server; how do I need to run the code below?
[Microsoft.SqlServer.Server.SqlProcedure]
public static void GeographyAStar(SqlInt32 StartID, SqlInt32 GoalID)
/* INITIALISATION */
// The "Open List" contains the nodes that have yet to be assessed
List<AStarNode> OpenList = new List<AStarNode>();
// The "Closed List" contains the nodes that have already been assessed
// Implemented as a Dictionary<> to enable quick lookup of nodes
Dictionary<int, AStarNode> ClosedList = new Dictionary<int, AStarNode>();
using (SqlConnection conn = new SqlConnection("context connection=true;"))
conn.Open();
// Retrieve the location of the StartID
using (SqlCommand cmdGetStartNode = new SqlCommand("SELECT geog4326 FROM Nodes WHERE NodeID = @id", conn))
SqlParameter param = new SqlParameter("@id", SqlDbType.Int);
param.Value = StartID;
cmdGetStartNode.Parameters.Add(param);
object startNode = cmdGetStartNode.ExecuteScalar();
if (startNode != null)
startGeom = (SqlGeography)(startNode);
else
throw new Exception("Couldn't find start node with ID " + StartID.ToString());
// Retrieve the location of the GoalID;
using (SqlCommand cmdGetEndNode = new SqlCommand("SELECT geog4326 FROM Nodes WHERE NodeID = @id", conn))
SqlParameter endparam = new SqlParameter("@id", SqlDbType.Int);
endparam.Value = GoalID;
cmdGetEndNode.Parameters.Add(endparam);
object endNode = cmdGetEndNode.ExecuteScalar();
if (endNode != null)
endGeom = (SqlGeography)(endNode);
else
throw new Exception("Couldn't find end node with ID " + GoalID.ToString());
conn.Close();
// To start with, the only point we know about is the start node
AStarNode StartNode = new AStarNode(
(int)StartID, // ID of this node
-1, // Start node has no parent
0, // g - the distance travelled so far to get to this node
(double)startGeom.STDistance(endGeom) // h - estimated remaining distance to the goal
);
// Add the start node to the open list
OpenList.Add(StartNode);
/* TRAVERSAL THROUGH THE NETWORK */
// So long as there are open nodes to assess
while (OpenList.Count > 0)
// Sort the list of open nodes by ascending f score
OpenList.Sort(delegate(AStarNode p1, AStarNode p2)
{ return p1.f.CompareTo(p2.f); });
// Consider the open node with lowest f score
AStarNode NodeCurrent = OpenList[0];
/* GOAL FOUND */
if (NodeCurrent.NodeID == GoalID)
// Reconstruct the route to get here
List<SqlGeography> route = new List<SqlGeography>();
int parentID = NodeCurrent.ParentID;
// Keep looking back through nodes until we get to the start (parent -1)
while (parentID != -1)
conn.Open();
using (SqlCommand cmdSelectEdge = new SqlCommand("GetEdgeBetweenNodes", conn))
// Retrieve the edge from this node to its parent
cmdSelectEdge.CommandType = CommandType.StoredProcedure;
SqlParameter fromOSODRparam = new SqlParameter("@NodeID1", SqlDbType.Int);
SqlParameter toOSODRparam = new SqlParameter("@NodeID2", SqlDbType.Int);
fromOSODRparam.Value = NodeCurrent.ParentID;
toOSODRparam.Value = NodeCurrent.NodeID;
cmdSelectEdge.Parameters.Add(fromOSODRparam);
cmdSelectEdge.Parameters.Add(toOSODRparam);
object edge = cmdSelectEdge.ExecuteScalar();
SqlGeography edgeGeom;
if (edge != null)
edgeGeom = (SqlGeography)(edge);
route.Add(edgeGeom);
conn.Close();
NodeCurrent = ClosedList[parentID];
parentID = NodeCurrent.ParentID;
// Send the results back to the client
SqlMetaData ResultMetaData = new SqlMetaData(
"Route", SqlDbType.Udt, typeof(SqlGeography)
);
SqlDataRecord Record = new SqlDataRecord(ResultMetaData);
SqlContext.Pipe.SendResultsStart(Record);
// Loop through route segments in reverse order
for (int k = route.Count - 1; k >= 0; k--)
Record.SetValue(0, route[k]);
SqlContext.Pipe.SendResultsRow(Record);
SqlContext.Pipe.SendResultsEnd();
return;
} // End if (NodeCurrent.NodeID == GoalID)
/* GOAL NOT YET FOUND - IDENTIFY ALL NODES ACCESSIBLE FROM CURRENT NODE */
List<AStarNode> Successors = new List<AStarNode>();
conn.Open();
using (SqlCommand cmdSelectSuccessors = new SqlCommand("GetNodesAccessibleFromNode", conn))
// Identify all nodes accessible from the current node
cmdSelectSuccessors.CommandType = CommandType.StoredProcedure;
SqlParameter CurrentNodeOSODRparam = new SqlParameter("@NodeID", SqlDbType.Int);
CurrentNodeOSODRparam.Value = NodeCurrent.NodeID;
cmdSelectSuccessors.Parameters.Add(CurrentNodeOSODRparam);
using (SqlDataReader dr = cmdSelectSuccessors.ExecuteReader())
while (dr.Read())
// Create a node for this potential successor
AStarNode SuccessorNode = new AStarNode(
dr.GetInt32(0), // NodeID
NodeCurrent.NodeID, // Successor node is a child of the current node
NodeCurrent.g + dr.GetDouble(1), // Distance from current node to successor
(double)(((SqlGeography)dr.GetValue(2)).STDistance(endGeom)) // h - estimate to goal
);
// Add to the end of the list of successors
Successors.Add(SuccessorNode);
conn.Close();
/* Examine list of possible nodes to go next */
foreach (AStarNode NodeSuccessor in Successors)
// Keep track of whether we have already found this node
bool found = false;
// If this node is already on the closed list, don't examine further
if (ClosedList.ContainsKey(NodeSuccessor.NodeID))
found = true;
// If we didn't find the node on the closed list, look for it on the open list
if (!found)
for (int j = 0; j < OpenList.Count; j++)
if (OpenList[j].NodeID == NodeSuccessor.NodeID)
found = true;
// If this is a cheaper way to get there
if (OpenList[j].h > NodeSuccessor.h)
// Update the route on the open list
OpenList[j] = NodeSuccessor;
break;
// If not on either list, add to the open list
if (!found)
OpenList.Add(NodeSuccessor);
// Once all successors have been examined, we've finished with the current node
// so move it to the closed list
OpenList.Remove(NodeCurrent);
ClosedList.Add(NodeCurrent.NodeID, NodeCurrent);
} // end while (OpenList.Count > 0)
SqlContext.Pipe.Send("No route could be found!");
return; -
A replacement for the Quicksort function in the C++ library
Hi every one,
I'd like to introduce and share a new Triple State Quicksort algorithm which was the result of my research in sorting algorithms during the last few years. The new algorithm reduces the number of swaps to about two thirds (2/3) of classical Quicksort. A multitude
of other improvements are implemented. Test results against the std::sort() function shows an average of 43% improvement in speed throughout various input array types. It does this by trading space for performance at the price of n/2 temporary extra spaces.
The extra space is allocated automatically and efficiently in a way that reduces memory fragmentation and optimizes performance.
Triple State Algorithm
The classical way of doing Quicksort is as follows:
- Choose one element p, called the pivot. Try to make it close to the median.
- Divide the array into two parts. A lower (left) part that is all less than p. And a higher (right) part that is all greater than p.
- Recursively sort the left and right parts using the same method above.
- Stop recursion when a part reaches a size that can be trivially sorted.
The difference between the various implementations is in how they choose the pivot p, and where equal elements to the pivot are placed. There are several schemes as follows:
[ <=p | ? | >=p ]
[ <p | >=p | ? ]
[ <=p | =p | ? | >p ]
[ =p | <p | ? | >p ] Then swap = part to middle at the end
[ =p | <p | ? | >p | =p ] Then swap = parts to middle at the end
Where the goal (or the ideal goal) of the above schemes (at the end of a recursive stage) is to reach the following:
[ <p | =p | >p ]
The above would allow exclusion of the =p part from further recursive calls, thus reducing the number of comparisons. However, there is a difficulty in reaching the above scheme with minimal swaps. No previous implementation of Quicksort could immediately
put =p elements in the middle using minimal swaps: first because p might not be in the perfect middle (i.e. the median), and second because we don't know how many elements are in the =p part until we finish the current recursive stage.
The new Triple State method first enters a monitoring state 1 while comparing and swapping. Elements equal to p are immediately copied to the middle if they are not already there, following this scheme:
[ <p | ? | =p | ? | >p ]
Then when either the left (<p) part or the right (>p) part meet the middle (=p) part, the algorithm will jump to one of two specialized states. One state handles the case for a relatively small =p part. And the other state handles the case for a relatively
large =p part. This method adapts to the nature of the input array better than the ordinary classical Quicksort.
Further reducing number of swaps
A typical Quicksort loop scans from the left, then scans from the right, then swaps, as follows:
while (l<=r)
{
    while (ar[l]<p)
        l++;
    while (ar[r]>p)
        r--;
    if (l<r)
    {
        Swap(ar[l],ar[r]);
        l++; r--;
    }
    else if (l==r)
    {
        l++; r--; break;
    }
}
The Swap macro above does three copy operations:
temp = ar[l]; ar[l] = ar[r]; ar[r] = temp;
There exists another method that will almost eliminate the need for that third temporary variable copy operation. By copying only the first ar[r] that is less than or equal to p, to the temp variable, we create an empty space in the array. Then we proceed scanning
from left to find the first ar[l] that is greater than or equal to p. Then copy ar[r]=ar[l]. Now the empty space is at ar[l]. We scan from right again then copy ar[l]=ar[r] and continue as such. As long as the temp variable hasn’t been copied back to the array,
the empty space will remain there juggling left and right. The following code snippet explains.
// Pre-scan from the right
while (ar[r]>p)
    r--;
temp = ar[r];
// Main loop
while (l<r)
{
    while (l<r && ar[l]<p)
        l++;
    if (l<r) ar[r--] = ar[l];
    while (l<r && ar[r]>p)
        r--;
    if (l<r) ar[l++] = ar[r];
}
// After loop finishes, copy temp to left side
ar[r] = temp; l++;
if (temp==p) r--;
(For simplicity, the code above does not handle equal values efficiently. Refer to the complete code for the elaborate version).
This method is not new; a similar method has been used before (see: http://www.azillionmonkeys.com/qed/sort.html).
However, it has a negative side effect on some common cases, like nearly sorted or nearly reversed arrays, causing undesirable shifting that renders it less efficient in those cases. When used within the Triple State algorithm, combined with further common-case
handling, it eventually proves more efficient than the classical swapping approach.
Run time tests
Here are some test results, done on an i5 2.9Ghz with 6Gb of RAM. Sorting a random array of integers. Each test is repeated 5000 times. Times shown in milliseconds.
size std::sort() Triple State QuickSort
5000 2039 1609
6000 2412 1900
7000 2733 2220
8000 2993 2484
9000 3361 2778
10000 3591 3093
It gets even faster when used with other types of input or when the size of each element is large. The following test is done for random large arrays of up to 1000000 elements where each element size is 56 bytes. Test is repeated 25 times.
size std::sort() Triple State QuickSort
100000 1607 424
200000 3165 845
300000 4534 1287
400000 6461 1700
500000 7668 2123
600000 9794 2548
700000 10745 3001
800000 12343 3425
900000 13790 3865
1000000 15663 4348
Further extensive tests have been done following Jon Bentley's framework of tests for the following input array types:
sawtooth: ar[i] = i % arange
random: ar[i] = GenRand() % arange + 1
stagger: ar[i] = (i* arange + i) % n
plateau: ar[i] = min(i, arange)
shuffle: ar[i] = rand()%arange? (j+=2): (k+=2)
I also add the following two input types, just to add a little torture:
Hill: ar[i] = min(i<(size>>1)? i:size-i,arange);
Organ Pipes: (see full code for details)
Where each case above is sorted, then reordered in 6 different ways, then sorted again after each reorder, as follows:
Sorted, reversed, front half reversed, back half reversed, dithered, fort.
Note: GenRand() above is a certified random number generator based on Park-Miller method. This is to avoid any non-uniform behavior in C++ rand().
The complete test results can be found here:
http://solostuff.net/tsqsort/Tests_Percentage_Improvement_VC++.xls
or:
https://docs.google.com/spreadsheets/d/1wxNOAcuWT8CgFfaZzvjoX8x_WpusYQAlg0bXGWlLbzk/edit?usp=sharing
Theoretical Analysis
A classical Quicksort algorithm performs fewer than 2n*ln(n) comparisons on average (see Jacek Cichon's paper) and fewer than 0.333n*ln(n) swaps on average (see Wild and Nebel's paper). Triple State will perform about the same number of comparisons
but with fewer swaps, about 0.222n*ln(n) in theory. In practice, however, Triple State Quicksort will perform even fewer comparisons on large arrays because of a new 5-stage pivot selection algorithm that is used. Here is the detailed theoretical analysis:
http://solostuff.net/tsqsort/Asymptotic_analysis_of_Triple_State_Quicksort.pdf
Using SSE2 instruction set
SSE2 uses the 128-bit XMM registers, of which there are 8, and these can do memory copy operations in parallel. SSE2 is primarily used to speed up copying of large memory blocks in real-time, graphics-demanding applications.
In order to use SSE2, copied memory blocks have to be 16-byte aligned. Triple State Quicksort will automatically detect whether the element size and the array starting address are 16-byte aligned and, if so, will switch to using SSE2 instructions for extra speedup. This
decision is made only once, when the function is called, so it has minor overhead.
Few other notes
- The standard C++ sorting function on almost all platforms religiously takes a "callback pointer" to a comparison function that the user/programmer provides. This is obviously for flexibility and to allow closed-source libraries. Triple State
defaults to using a callback function. However, callback functions have bad overhead when called millions of times. Using inline/operator or macro-based comparisons will greatly improve performance; an improvement of about 30% to 40% can be expected. Thus,
I seriously advise against using a callback function whenever possible. You can disable the callback function in my code by #undefining the CALL_BACK precompiler directive.
- Like most other efficient implementations, Triple State switches to insertion sort for tiny arrays, whenever the size of a sub-part of the array is less than the TINY_THRESH directive. This threshold is empirically chosen; I set it to 15. Increasing this
threshold will improve speed when sorting nearly sorted or reversed arrays, or arrays that are concatenations of both cases (which are common), but will slow down sorting random or other types of arrays. To remedy this, I provide a dual-threshold method
that can be enabled by #defining the DUAL_THRESH directive. Once enabled, another threshold, TINY_THRESH2, will be used, which should be set lower than TINY_THRESH. I set it to 9. The algorithm is able to "guess" whether the array or sub-part of the array is already sorted
or reversed, and if so will use TINY_THRESH as its threshold; otherwise it will use the smaller threshold TINY_THRESH2. Notice that the "guessing" here is NOT foolproof; it can miss. So set both thresholds wisely.
- You can #define the RANDOM_SAMPLES precompiler directive to add randomness to the pivoting system to lower the chances of the worst case happening at a minor performance hit.
- When element size is very large (320 bytes or more), the function/algorithm uses a new "late swapping" method. This will automatically create an internal array of pointers, sort the pointer array, then swap the original array elements into sorted order using minimal
swaps, for a maximum of n/2 swaps. You can change the 320-byte threshold with the LATE_SWAP_THRESH directive.
- The function provided here is optimized to the bone for performance. It is one monolithic piece of complex code that is ugly and almost unreadable. Sorry about that, but in order to achieve improved speed, I had to ignore common and good coding standards
a little. I don't advise anyone to code like this, and I myself don't. This is really a special case for sorting only. So please don't trip if you see weird code; most of it has a good reason.
Finally, I would like to present the new function to Microsoft and the community for further investigation and possibly, inclusion in VC++ or any C++ library as a replacement for the sorting function.
You can find the complete VC++ project/code along with a minimal test program here:
http://solostuff.net/tsqsort/
Important: To fairly compare two sorting functions, both should either use or NOT use a callback function. If one uses one and the other doesn't, then you will get unfair results; the one that doesn't use a callback function will most likely win no matter how bad
it is!!
Ammar Muqaddas
Thanks for your interest.
Excuse my ignorance, as I'm not sure what you meant by "1 of 5" optimization. Did you mean median-of-5?
Regarding swapping pointers, yes it is common sense and rather common among programmers to swap pointers instead of swapping large data types, at the small price of indirect access to the actual data through the pointers.
However, there is a rather unobvious and quite terrible side effect of using this trick. After the pointer array is sorted, sequential (sorted) access to the actual data throughout the remainder of the program will suffer heavily because of cache misses.
Memory is accessed randomly, because the pointers still point to the unsorted data, causing many cache misses which will render the program itself slow, although the sort was fast!
Multi-threaded Quicksort is a good idea in principle, and easy to implement, obviously, because Quicksort itself is recursive. The thing is, a multi-threaded Quicksort is actually just stealing CPU time from other cores that might be busy running other apps; this might slow
down those apps, which might not be ideal for servers. What researchers usually try to do is make the improvement in the algorithm itself.
I will try to look at your sorting code; let's see if I can compile it. -
I'm attempting to dynamically generate a rather large SQL query via the "PL/SQL function body returning SQL query" report region option. The SQL query generated will possibly be over 32K. When I execute my page, I sometimes receive "ORA-06502: PL/SQL: numeric or value error", which points to a larger-than-32K query being generated. I've seen other posts in the forum related to this dynamic SQL size limitation, but they are older (pre-2010) and point to the 32K limit of NDS (EXECUTE IMMEDIATE) and DBMS_SQL. I found a post (dynamic SQL enhancements in 11g) which discusses 11g no longer having the 32K size limitation for generating dynamic SQL. Our environment is on 11gR2 and using APEX 4.2.1. I do not know which dynamic SQL method -- NDS or DBMS_SQL -- APEX 4.2.1 is using. Can someone clarify for me which dynamic SQL method APEX uses to implement the "PL/SQL function body returning SQL query" option?
As a test, I created a page on apex.oracle.com with a report region with the following source:
declare
l_stub varchar2(25) := 'select * from sys.dual ';
l_sql clob := l_stub || 'union all ';
br number(3) := 33;
begin
while length ( l_sql ) < 34000 loop
l_sql := l_sql || l_stub || 'union all ';
end loop;
l_sql := l_sql || l_stub;
for i in 1 .. ceil ( length ( l_sql ) / br ) loop
dbms_output.put_line ( dbms_lob.substr ( l_sql, br, ( ( i - 1 ) * br ) + 1 ) );
end loop;
return l_sql;
end;
The dbms_output section is there to be able to run this code in SQL*Plus and confirm the size of the SQL is indeed larger than 32K. When running this in SQL*Plus, the procedure is successful and produces a proper SQL statement which can be executed. When I put this into the report region on apex.oracle.com, I get the ORA-06502 error.
I can certainly implement a work-around for my issue by creating a 'Before Header' process on the page which populates an ApEx collection with the data I am returning and then the report can simply select from the collection, but according to documentation, the above 32K limitation should be resolved in 11g. Thoughts?
Shane.
What settings do you use in your report properties - especially in Type and in Region Source?
If you have Type="SQL Query", then you should have a SELECT statement in the Region Source. Something like: SELECT .... FROM ... WHERE
According to the ERR-1101 error message, you have probably set Type to "SQL Query (PL/SQL function body returning SQL query)". In this situation APEX expects you to write a body of a PL/SQL function, that will generate the text of a SQL query that APEX should run. So it can be something like:
declare
mycond varchar2(4000);
begin
if :P1_REPORT_SEARCH is not null THEN
mycond:='WHERE LAST_NAME like :P1_REPORT_SEARCH ||''%''';
end if;
return 'select EMPLOYEE_ID, FIRST_NAME, LAST_NAME from EMPLOYEES ' ||mycond;
end;
And for escaping - are you interested in escaping the LIKE wildcards, or the quotes?
For escaping the wildcards in LIKE function so that when the user enters % you will find a record with % and not all functions, look into the SQL Reference:
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14200/conditions007.htm
(You would then need to change the code of your function accordingly.)
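A minimal sketch of that change, escaping the LIKE wildcards in the user's input so '%' and '_' match literally; the bind variable name is taken from the example above, and the backslash escape character is an arbitrary choice:

```sql
-- Sketch: make user-entered '%' and '_' match literally in LIKE.
SELECT employee_id, first_name, last_name
FROM   employees
WHERE  last_name LIKE
       REPLACE(REPLACE(REPLACE(:P1_REPORT_SEARCH,
               '\', '\\'), '%', '\%'), '_', '\_') || '%'
       ESCAPE '\';
```

Note the escape character itself must be escaped first, before the wildcards.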
If you are interested in escaping the quotes, try to avoid concatenating values entered by the user into the SQL. If you can, use bind variables instead - as I have in my example above. If you start concatenating the values into the text of the SQL, you are open to SQL injection - the user can enter anything, even things that will break your SQL. If you really need to allow users to choose the operator, I would probably give them a separate combo for operators and a text field for values; then you could check that the operator is one of the allowed ones and create the condition accordingly - and then still use a bind variable for inserting the filtering value into the query.
Error in report when executing pl/sql function body returning sql query.
Hi,
I have used the PL/SQL function body returning SQL query option for creating a report. I have created a date picker (P10_TASK_DATE) which can be submitted. The code is as below:
DECLARE
v_sql varchar2(3000);
BEGIN
if :P10_TASK_DATE is not null THEN
v_sql:='select * from tasks';
return v_sql;
else
v_sql:='select * from discovery';
return v_sql;
END IF;
END;
If the date field is empty, "select * from discovery" is executed and the report is generated. But when we give a
date using the date picker, the page is submitted and I get "report error: ORA-01403: no data found" even
though the "tasks" table has data in it. Please help.
Thanks,
TJ
Hi,
Please try this:
1. Create 2 regions.
1st region source:
select * from tasks
Go to the Conditions tab and set:
Condition = Item NOT NULL
Expression 1 = P10_TASK_DATE
This region will run whenever the item holds a date.
2. Open your 2nd region; source: select * from discovery
Put the condition:
Condition = Item is NULL
Expression 1 = P10_TASK_DATE
Thanks
Mark Wyatt -
Timing of Report (function body returning sql) w/ pagination
Hey -
Wondered if someone can provide some insight here...
I have a report that is dynamically created by calling a function that returns sql. Since it may return a few hundred results I have pagination turned on allowing the user to choose rows per page, and am showing X-Y with next/prev links.
Before displaying the report, the user has to choose some criteria to narrow down the result set. I'm finding something that I think is a bit strange, in that it takes more time for the page to render when zero results are found than when hundreds are. If I run the function and take the SQL it creates, I can return 0 rows in 0.2 seconds and 508 rows in 0.5 seconds, so I think the SQL itself is fine. Other than the debug timings, how can I find out what is taking so long?
Debug for 0 results:
0.07: Region: Program Forecast (viewing SAVED values)
5.14: show report
5.15: determine column headings
5.15: parse query as: ####
5.15: print column headings
5.15: rows loop: 15 row(s)
10.32: Computation point: AFTER_BOX_BODY
Debug for 508 results (in chunks of 15):
0.07: Region: Program Forecast (viewing SAVED values)
2.76: show report
2.76: determine column headings
2.77: parse query as: ####
2.94: print column headings
2.94: rows loop: 15 row(s)
6.52: Computation point: AFTER_BOX_BODY
So it took only 6.5 seconds to pull 508 rows, look at my pagination and pull the first x rows, but 10.32 seconds to pull 0 rows and show me the no data found message. Even stranger is that in the 0 rows result it looped and took 5 seconds before it hit the next point (5 seconds doing what??)
Any ideas? I'm a little baffled here... I guess the next step is to trace it, but I wanted to see if anyone had any ideas in the interim.
Where is the embarrassed icon?
After painstakingly copying each item over to a new page to test, I figured out that performance started sucking big time once I put a button out there that apparently had a less-than-optimal EXISTS clause in it.
D'oh! -
ORA-12899 error from function invoked from SQL*Loader
I am getting the above error when I call a function from my SQL*Loader script, and I am not seeing what the problem is. As far as I can see, there should be no problem with the field lengths, unless the length of the automatic variable within my function is somehow being set at 30? Here are the details (in the SQL*Loader script, the field of interest is the last one):
====
Error:
====
Record 1: Rejected - Error on table TESTM8.LET_DRIVE_IN_FCLTY, column DIF_CSA_ID.
ORA-12899: value too large for column "TESTM8"."LET_DRIVE_IN_FCLTY"."DIF_CSA_ID" (actual: 30, maximum: 16)
=======
Function:
=======
CREATE OR REPLACE FUNCTION find_MCO_id (di_oid_in DECIMAL)
RETURN CHAR IS mco_id CHAR;
BEGIN
SELECT AOL_MCO_LOC_CD INTO mco_id
FROM CONV_DI_FLCTY
WHERE DIF_INST_ELMNT_OID = di_oid_in;
RETURN TRIM(mco_id);
END;
==============
SQL*Loader Script:
==============
LOAD DATA
INFILE 'LET_DRIVE_IN_FCLTY.TXT'
BADFILE 'LOGS\LET_DRIVE_IN_FCLTY_BADDATA.TXT'
DISCARDFILE 'LOGS\LET_DRIVE_IN_FCLTY_DISCARDDATA.TXT'
REPLACE
INTO TABLE TESTM8.LET_DRIVE_IN_FCLTY
FIELDS TERMINATED BY '~' OPTIONALLY ENCLOSED BY '"'
(
DIF_DRIVE_IN_OID DECIMAL EXTERNAL,
DIF_FCLTY_TYPE_OID DECIMAL EXTERNAL NULLIF DIF_FCLTY_TYPE_OID = 'NULL',
DIF_INST_ELMNT_OID DECIMAL EXTERNAL,
DIF_PRI_PERSON_OID DECIMAL EXTERNAL NULLIF DIF_PRI_PERSON_OID = 'NULL',
DIF_SEC_PERSON_OID DECIMAL EXTERNAL NULLIF DIF_SEC_PERSON_OID = 'NULL',
DIF_CREATE_TS TIMESTAMP "yyyy-mm-dd-hh24.mi.ss.ff6",
DIF_LAST_UPDATE_TS TIMESTAMP "yyyy-mm-dd-hh24.mi.ss.ff6",
DIF_ADP_ID CHAR NULLIF DIF_ADP_ID = 'NULL',
DIF_CAT_CLAIMS_IND CHAR,
DIF_CAT_DIF_IND CHAR,
DIF_DAYLT_SAVE_IND CHAR,
DIF_OPEN_PT_TM_IND CHAR,
DIF_CSA_ID CONSTANT "find_MCO_id(:DIF_DRIVE_IN_OID)"
)
============
Table Definitions:
============
SQL> describe CONV_DI_FLCTY;
Name Null? Type
DIF_INST_ELMNT_OID NOT NULL NUMBER(18)
AOL_MCO_LOC_CD NOT NULL VARCHAR2(3)
SQL> describe LET_DRIVE_IN_FCLTY;
Name Null? Type
DIF_DRIVE_IN_OID NOT NULL NUMBER(18)
DIF_INST_ELMNT_OID NOT NULL NUMBER(18)
DIF_FCLTY_TYPE_OID NUMBER(18)
DIF_ADP_ID VARCHAR2(10)
DIF_CAT_DIF_IND NOT NULL VARCHAR2(1)
DIF_CAT_CLAIMS_IND NOT NULL VARCHAR2(1)
DIF_CSA_ID VARCHAR2(16)
DIF_DAYLT_SAVE_IND NOT NULL VARCHAR2(1)
DIF_ORG_ENTY_ID VARCHAR2(16)
DIF_OPEN_PT_TM_IND NOT NULL VARCHAR2(1)
DIF_CREATE_TS NOT NULL DATE
DIF_LAST_UPDATE_TS NOT NULL DATE
DIF_ITM_FCL_MKT_ID NUMBER(18)
DIF_PRI_PERSON_OID NUMBER(18)
DIF_SEC_PERSON_OID NUMBER(18)
=========================
Thanks for any help with this one!
I changed one line of the function to:
RETURN CHAR IS mco_id VARCHAR2(16);
But I still get the same error:
ORA-12899: value too large for column "TESTM8"."LET_DRIVE_IN_FCLTY"."DIF_CSA_ID" (actual: 30, maximum: 16)
I just am not seeing what is being defined as 30 characters. Any ideas much appreciated! -
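One possible explanation, offered as an assumption rather than something confirmed in the thread: in a SQL*Loader control file, CONSTANT loads its argument as a literal string instead of evaluating it, and the literal find_MCO_id(:DIF_DRIVE_IN_OID) is exactly 30 characters long, which would match the "actual: 30" in the error. To have SQL*Loader evaluate a SQL expression, the quoted expression follows a datatype, without CONSTANT:

```sql
-- Control-file field sketch: evaluate the function instead of loading
-- the literal text of the call (the CHAR datatype here is an assumption).
DIF_CSA_ID  CHAR "find_MCO_id(:DIF_DRIVE_IN_OID)"
```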
Unable to edit some functions in APEX Sql Workshop
Hi
Users are able to edit some procedures/functions in APEX SQL Work shop. ( Object Browser - functions - EDIT)
When we press edit we get cursor in the code area and can edit some procedures, But for some procedures when we click edit we don't get cursor in the code area and we are not able to edit the functions/procedures.
I am using the Firefox browser. This is happening with only some. Is there any security/grants issue?
Thanks
Sree
Hi
This is happening with some procedures; for others it works fine. In IE I get a red block in the code area.
EDIT is working for some procedures so I think may not be the browser issue.
Thanks
Sree -
Hello!
Please help me in migrating a database (including data, stored procedures, views, indexes, forms, triggers etc.) from Oracle 8.0.5 for NT to Oracle 8.1.7 for Unix.
Check the installation option that you chose.
Not all of the options will install a database.
P.S.
I have a similar problem on NT.
Where I am able to access the database after the 8i Enterprise
Install. I can not access the database from a Developer or
Designer Installation. I am trying to run these tools locally.
How do I configure NET8 to access the local 8i Database?
Any ideas??
Regards, Jim
Emeka (guest) wrote:
: I just installed the Oracle 8 Enterprise Edition for windows NT
: and i can't find the Oracle database to support the tables. Also
: the username and password of scott and tiger didn't work for the
: SQL Plus. ERROR 'ORA-12203: TNS: unable to connect to destination'
: was the message when i try getting into SQL. Could someone please
: tell me how to install the database and how to get the user name
: and password for the SQL Plus.
-
How to create a function with dynamic sql or any better way to achieve this?
Hello,
I have created the SQL query below, which works fine; however, when it is created as a scalar function, it throws the error "Only functions and extended stored procedures can be executed from within a function.". In the code below, the first cursor reads all client database names and the second cursor reads client locations.
DECLARE @clientLocation nvarchar(100), @locationClientPath nvarchar(max);
DECLARE @ItemID int;
SET @locationClientPath = char(0);
SET @ItemID = 67480;
-- build dynamic SQL to substitute the database name at runtime
DECLARE @strSQL nvarchar(max);
DECLARE @DatabaseName nvarchar(100);
DECLARE @localClientPath nvarchar(max);
DECLARE databaselist_cursor CURSOR FOR
    SELECT [DBName] FROM [DataBase].[dbo].[tblOrganization];
OPEN databaselist_cursor;
FETCH NEXT FROM databaselist_cursor INTO @DatabaseName;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT 'Processing DATABASE: ' + @DatabaseName;
    SET @strSQL = 'DECLARE organizationlist_cursor CURSOR FOR
        SELECT ' + @DatabaseName + '.[dbo].[usGetLocationPathByRID]([LocationRID])
        FROM ' + @DatabaseName + '.[dbo].[tblItemLocationDetailOrg]
        WHERE ItemId = ' + CAST(@ItemID AS nvarchar(20));
    EXEC sp_executesql @strSQL;
    -- open the cursor declared by the dynamic SQL
    OPEN organizationlist_cursor;
    SET @localClientPath = '';
    -- walk each location path and accumulate the result
    FETCH NEXT FROM organizationlist_cursor INTO @clientLocation;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SELECT @localClientPath = @clientLocation;
        SELECT @locationClientPath = @locationClientPath + @clientLocation + ',';
        FETCH NEXT FROM organizationlist_cursor INTO @clientLocation;
    END
    PRINT 'current database client location: ' + @localClientPath;
    -- close the cursor
    CLOSE organizationlist_cursor;
    DEALLOCATE organizationlist_cursor;
    FETCH NEXT FROM databaselist_cursor INTO @DatabaseName;
END
CLOSE databaselist_cursor;
DEALLOCATE databaselist_cursor;
-- trim the trailing comma from the string
SELECT @locationClientPath = SUBSTRING(@locationClientPath, 1, LEN(@locationClientPath) - 1);
PRINT @locationClientPath;
I would like to create the above query as a function so that the return value can be used in another query's SELECT statement; I am using SQL 2005. Is there a way to make this work as a function, or any better way to achieve this?
Thanks,
This is very simple: we cannot use dynamic SQL from user-defined functions written in T-SQL. This is because you are not permitted to do anything in a UDF that could change the database state (as the UDF may be invoked as part of a query). Since you can do anything from dynamic SQL, including updates, it is obvious why dynamic SQL is not permitted, as per Microsoft.
In SQL 2005 and later, you could implement your function as a CLR function. Recall that all data access from the CLR is dynamic SQL. (Here you are safeguarded, so that if you perform an update operation from your function, you will get caught.) A word of warning though: data access from scalar UDFs can often give performance problems, and it is not recommended.
Raju Rasagounder Sr MSSQL DBA
Hi Raju,
Can you help me writing CLR for my above function? I am newbie to SQL CLR programming.
Thanks in advance!
Satya
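If CLR is not an option, one common workaround is a stored procedure with an OUTPUT parameter, since dynamic SQL is legal in procedures. This is only a sketch, under the assumption that the cursor logic from the question is pasted in unchanged; the procedure name is hypothetical:

```sql
-- Sketch: procedures, unlike T-SQL UDFs, may execute dynamic SQL.
CREATE PROCEDURE usp_GetLocationClientPath
    @ItemID int,
    @locationClientPath nvarchar(max) OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    SET @locationClientPath = N'';
    -- ... cursor / sp_executesql body from the question goes here ...
END
GO
-- Callers capture the value instead of calling a function:
DECLARE @path nvarchar(max);
EXEC usp_GetLocationClientPath @ItemID = 67480, @locationClientPath = @path OUTPUT;
```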