Table creation with partitions
The following is the table creation script with partitioning:
CREATE TABLE customer_entity_temp (
  BRANCH_ID NUMBER(4),
  ACTIVE_FROM_YEAR VARCHAR2(4),
  ACTIVE_FROM_MONTH VARCHAR2(3)
)
PARTITION BY RANGE (ACTIVE_FROM_YEAR, ACTIVE_FROM_MONTH)
( PARTITION yr7_1999 VALUES LESS THAN ('1999', TO_DATE('Jul','Mon')),
  PARTITION yr12_1999 VALUES LESS THAN ('1999', TO_DATE('Dec','Mon')),
It gives an error:
ORA-14036: partition bound value too large for column
But if I increase the size of the ACTIVE_FROM_MONTH column to 9, the script works and creates the table. Why is that?
Also, when creating a table this way and populating the data into the respective partitions, all rows with month less than "Jul" will go into the yr7_1999 partition and all rows with month between "Jul" and "Dec" will go into the second partition, yr12_1999. Where will the rows with month equal to "Dec" go?
Please help me solve this problem.
Thanks and regards,
Moloy
Hi,
You declared ACTIVE_FROM_MONTH as VARCHAR2(3), yet in your partitioning clause you compare it against a DATE: TO_DATE('Jul','Mon'). Oracle converts that DATE bound to a character value, and the converted value is longer than 3 characters, hence ORA-14036; widening the column to VARCHAR2(9) makes the converted bound fit. You should first check your data model and what exactly you are trying to achieve.
Also note that partition bounds are strictly less than (<), not less than or equal (<=): a row whose key equals the bound of yr12_1999 does not fit into that partition, and with this declaration any row at or above the highest bound cannot be inserted at all. I'd advise you to look at the MAXVALUE keyword and the ENABLE ROW MOVEMENT partitioning clause.
Regards,
Yoann.
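A sketch of one way around both issues, assuming the month can be stored as a number rather than a month name (table, column, and partition names are illustrative, not taken from the original post):

```sql
-- Storing the month as a number makes range comparisons behave as
-- expected, and a MAXVALUE partition guarantees every row has a home.
CREATE TABLE customer_entity_temp (
  branch_id         NUMBER(4),
  active_from_year  NUMBER(4),
  active_from_month NUMBER(2)
)
PARTITION BY RANGE (active_from_year, active_from_month)
( PARTITION yr7_1999  VALUES LESS THAN (1999, 7),          -- up to Jun 1999
  PARTITION yr12_1999 VALUES LESS THAN (1999, 12),         -- Jul-Nov 1999
  PARTITION yr_max    VALUES LESS THAN (MAXVALUE, MAXVALUE) -- Dec 1999 onward
);
```

With this layout a December 1999 row falls into yr_max, since the bound of yr12_1999 is exclusive.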
Similar Messages
-
Dynamic Table Creation with the sortable collection model in the bean.
Hi all,
My requirement: I want to create a table dynamically with a set of rows. This set of rows is held in the bean in a sortable variable
(private SortableModel viewDeleteCollectionModel).
Issue: "No Data to Display" is shown in the dynamic table, whereas when I check the collection model's row count, it shows the correct number of rows.
Code for Dynamic Table in JSFF:
<af:table varStatus="rowStat" value="#{ViewHistoryDelete.viewDeleteCollectionModel}"
rows="#{ViewHistoryDelete.viewDeleteCollectionModel.rowCount}"
rowSelection="single" width="99%" var="row" id="t2" summary=" " >
<af:forEach items="#{ViewHistoryDelete.viewDeleteColumnNames}" var="name">
<af:column headerText="#{name}" sortable="true" sortProperty="#{name}" id="pt_c1">
<af:inputText value="#{row[name]}" label="#{row[name]}" readOnly="true" id="it2"/>
</af:column>
</af:forEach>
</af:table>
Bean code:
Variable declaration: private SortableModel viewDeleteCollectionModel;
Rows assigned to the collection model:
viewDeleteCollectionModel = new SortableModel(new ArrayList<Map<String, Object>>());
((List<Map<String, Object>>) viewDeleteCollectionModel.getWrappedData()).addAll(viewDeletedData);
viewHistoryDelete.setViewDeleteCollectionModel(viewDeleteCollectionModel);
After this I checked by adding a System.out.println for the table row count, and I get the actual number of rows, but the screen does not show them.
Any solution?
reg,
bakkia.
Edited by: Bakkia on Oct 11, 2011 11:20 PM
Did you find a solution to this problem?
-
Create table as with partitions
Hi,
Is there a way to create table1 the same as table2, with the exact partition and subpartition names? I need to exchange partitions between the tables: move the data from table1 to table2 and drop the subpartition from table1. Any help is appreciated.
Thanks
SC
Hi,
Thanks for the reply. I tried dbms_metadata.get_ddl and it worked, but it lists all the partitions and subpartitions. table2 is partitioned by date and serviceid. Is there a way to generate the DDL for specific partitions of table2 rather than all the partitions and subpartitions?
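For the exchange itself, a full clone of the partitioning scheme may not be needed: ALTER TABLE ... EXCHANGE PARTITION swaps a partition's data segment with a plain, non-partitioned table of the same column structure. A minimal sketch (the staging-table and subpartition names here are illustrative):

```sql
-- Build an empty, non-partitioned table with the same columns as table2.
CREATE TABLE t_stage AS
  SELECT * FROM table2 WHERE 1 = 0;

-- Swap the data segment of one subpartition with the staging table;
-- the subpartition name comes from USER_TAB_SUBPARTITIONS.
ALTER TABLE table2
  EXCHANGE SUBPARTITION p_201111_svc1 WITH TABLE t_stage;
```

After the exchange, the staging table holds the subpartition's former rows and can be processed or dropped independently.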
Thank You
SC
-
Why does dynamic table creation with Struts work only with JDK 1.3.1_02?
Row
import java.util.Vector;
public class Row {
    private static int colsize;
    private Column[] columns;

    public void setColumns(Column[] columns) {
        System.out.println("SetColumns");
        this.columns = columns;
    }

    public void setColumn(int i, Column column) {
        System.out.println("setting " + i + "th column " + column);
        columns[i] = column;
    }

    public Column[] getColumns() {
        return null;
    }

    public Column getColumns(int i) {
        System.out.println("Column" + i);
        System.out.println("Colsize" + colsize);
        if (columns == null)
            columns = new Column[colsize];
        if (columns[i] == null)
            columns[i] = new Column();
        return columns[i];
    }

    public int getColsize() {
        return colsize;
    }

    public static void setColsize(int size) {
        colsize = size;
    }
}
Column:
public class Column {
    private String value;

    public void setValue(String value) {
        System.out.println("Value=" + value);
        this.value = value;
    }

    public String getValue() {
        return value;
    }
}
ApplicationResources:
button.cancel=Cancel
button.confirm=Confirm
button.reset=Reset
button.save=Save
database.load=Cannot load database from {0}
error.database.missing=<li>User database is missing, cannot validate logon credentials</li>
error.fromAddress.format=<li>Invalid format for From Address</li>
error.fromAddress.required=<li>From Address is required</li>
error.fullName.required=<li>Full Name is required</li>
error.host.required=<li>Mail Server is required</li>
error.noSubscription=<li>No Subscription bean in user session</li>
error.password.required=<li>Password is required</li>
error.password2.required=<li>Confirmation password is required</li>
error.password.match=<li>Password and confirmation password must match</li>
error.password.mismatch=<li>Invalid username and/or password, please try again</li>
error.replyToAddress.format=<li>Invalid format for Reply To Address</li>
error.transaction.token=<li>Cannot submit this form out of order</li>
error.type.invalid=<li>Server Type must be 'imap' or 'pop3'</li>
error.type.required=<li>Server Type is required</li>
error.username.required=<li>Username is required</li>
error.username.unique=<li>That username is already in use - please select another</li>
errors.footer=</ul><hr>
errors.header=<h3><font color="red">Validation Error</font></h3>You must correct the following error(s) before proceeding:<ul>
errors.ioException=I/O exception rendering error messages: {0}
heading.autoConnect=Auto
heading.subscriptions=Current Subscriptions
heading.host=Host Name
heading.user=User Name
heading.type=Server Type
heading.action=Action
index.heading=MailReader Demonstration Application Options
index.logon=Log on to the MailReader Demonstration Application
index.registration=Register with the MailReader Demonstration Application
index.title=MailReader Demonstration Application (Struts 1.0-b1)
index.tour=A Walking Tour of the Example Application
linkSubscription.io=I/O Error: {0}
linkSubscription.noSubscription=No subscription under attribute {0}
linkUser.io=I/O Error: {0}
linkUser.noUser=No user under attribute {0}
logon.title=MailReader Demonstration Application - Logon
mainMenu.heading=Main Menu Options for
mainMenu.logoff=Log off MailReader Demonstration Application
mainMenu.registration=Edit your user registration profile
mainMenu.title=MailReader Demonstration Application - Main Menu
option.imap=IMAP Protocol
option.pop3=POP3 Protocol
prompt.autoConnect=Auto Connect:
prompt.fromAddress=From Address:
prompt.fullName=Full Name:
prompt.mailHostname=Mail Server:
prompt.mailPassword=Mail Password:
prompt.mailServerType=Server Type:
prompt.mailUsername=Mail Username:
prompt.password=Password:
prompt.password2=(Repeat) Password:
prompt.replyToAddress=Reply To Address:
prompt.username=Username:
registration.addSubscription=Add
registration.deleteSubscription=Delete
registration.editSubscription=Edit
registration.title.create=Register for the MailReader Demonstration Application
registration.title.edit=Edit Registration for the MailReader Demonstration Application
subscription.title.create=Create New Mail Subscription
subscription.title.delete=Delete Existing Mail Subscription
subscription.title.edit=Edit Existing Mail Subscription
LogonForm
import javax.servlet.http.HttpServletRequest;
import org.apache.struts.action.ActionError;
import org.apache.struts.action.ActionErrors;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionMapping;
public class LogonForm extends ActionForm {
    private String username;
    private String password;
    private String errors;

    public String getUsername() {
        return username;
    }

    public void setUsername(String username) {
        this.username = username;
    }

    public void setPassword(String password) {
        this.password = password;
    }

    public String getPassword() {
        return password;
    }

    public String getErrors() {
        return errors;
    }

    public void setErrors(String errors) {
        this.errors = errors;
    }

    public ActionErrors validate(ActionMapping mapping,
                                 HttpServletRequest request) {
        ActionErrors errors = new ActionErrors();
        if ((username == null) || (username.length() < 1))
            errors.add("username", new ActionError("error.username.required"));
        if ((password == null) || (password.length() < 1))
            errors.add("password", new ActionError("error.password.required"));
        return errors;
    }
}
TableForm
import org.apache.struts.action.ActionForm;
import java.util.Vector;
public class TableForm extends ActionForm {
    private static int rowsize;
    private Row[] rows;

    public Row getRows(int i) {
        System.out.println("Row" + i);
        System.out.println("Rowsize" + rowsize);
        if (rows == null)
            rows = new Row[rowsize];
        if (rows[i] == null)
            rows[i] = new Row();
        return rows[i];
    }

    public Row[] getRows() {
        return null;
    }

    public void setRows(Row[] rows) {
        System.out.println("SetRows");
        // this.rows = rows;
    }

    public static void setRowsize(int size) {
        rowsize = size;
    }

    public int getRowSize() {
        return rowsize;
    }
}
LogonAction
import java.io.IOException;
import java.util.Hashtable;
import java.util.Locale;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.Action;
import org.apache.struts.action.ActionError;
import org.apache.struts.action.ActionErrors;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;
import org.apache.struts.action.ActionServlet;
import org.apache.struts.util.MessageResources;
public class LogonAction extends Action {
    public ActionForward execute(ActionMapping mapping,
                                 ActionForm form,
                                 HttpServletRequest request,
                                 HttpServletResponse response)
            throws IOException, ServletException {
        LogonForm logonForm = (LogonForm) form;
        System.out.println(logonForm);
        System.out.println(logonForm.getUsername());
        System.out.println(logonForm.getPassword());
        if (logonForm.getUsername().equals("test") && logonForm.getPassword().equals("test")) {
            //TableForm tform = new TableForm();
            //tform.setRowsize(2);
            //tform.getRows(0).setColsize(2);
            //tform.getRows(1).setColsize(2);
            //request.getSession().setAttribute("tableForm", tform);
            System.out.println("Table Form setRowSize");
            TableForm.setRowsize(2);
            System.out.println("Table Form set ColSize");
            Row.setColsize(2);
            System.out.println("Returning success");
            return mapping.findForward("success");
        } else {
            ActionErrors errors = new ActionErrors();
            errors.add("password",
                new ActionError("error.password.mismatch"));
            saveErrors(request, errors);
            //logonForm.setErrors("LoginError");
            return mapping.findForward("failure");
        }
    }
}
<?xml version="1.0" encoding="ISO-8859-1" ?>
<!DOCTYPE struts-config PUBLIC
"-//Apache Software Foundation//DTD Struts Configuration 1.0//EN"
"http://jakarta.apache.org/struts/dtds/struts-config_1_0.dtd">
<!--
This is the Struts configuration file for the example application,
using the proposed new syntax.
NOTE: You would only flesh out the details in the "form-bean"
declarations if you had a generator tool that used them to create
the corresponding Java classes for you. Otherwise, you would
need only the "form-bean" element itself, with the corresponding
"name" and "type" attributes.
-->
<struts-config>
<form-beans>
<!-- Logon form bean -->
<form-bean name="logonForm"
type="LogonForm"/>
<form-bean name="tableForm"
type="TableForm"/>
<form-bean name="profileForm"
type="ProfileForm"/>
</form-beans>
<global-forwards>
<forward name="success" path="/Profile.jsp"/>
</global-forwards>
<!-- ========== Action Mapping Definitions ============================== -->
<action-mappings>
<!-- Edit user registration -->
<action path="/logon"
type="LogonAction"
name="logonForm"
scope="request"
validate="false"
input="/Test.jsp">
<forward name="success" path="/Table.jsp"/>
<forward name="failure" path="/Test.jsp"/>
</action>
<action path="/table"
type="TableAction"
name="tableForm"
scope="request"
validate="false">
<forward name="success" path="/Bean.jsp"/>
<forward name="failure" path="/Table.jsp"/>
</action>
<action path="/profile"
type="ProfileAction"
name="profileForm"
scope="request"
validate="false"
parameter="method">
<forward name="edit" path="/EditProfile.jsp"/>
<forward name="show" path="/Profile.jsp"/>
</action>
</action-mappings>
</struts-config>
Test.jsp
<%@ taglib uri="/WEB-INF/struts-html.tld" prefix="html" %>
<html:html locale="true">
<html:form action="/logon" >
<center>
<table>
<tr>
<td> Username </td>
<td> <html:text property="username" size="16" maxlength="16"/> </td>
<td> <html:errors property="username" /> </td>
</tr>
<tr>
<td> Password </td>
<td> <html:password property="password" size="16" maxlength="16"
redisplay="false"/> </td>
<td><html:errors property="password" /> </td>
</tr>
</table>
</center>
<center> <html:submit property="submit" value="Submit"/> </center>
</html:form>
</html:html>
Table.jsp
<%@ taglib uri="/WEB-INF/struts-html.tld" prefix="html" %>
<html:html locale="true">
<html:form action="/table" >
<center>
<table>
<tr>
<td> <html:text property="rows[0].columns[0].value" /> </td>
<td> <html:text property="rows[0].columns[1].value" /></td>
</tr>
<tr>
<td> <html:text property="rows[1].columns[0].value" /> </td>
<td> <html:text property="rows[1].columns[1].value" /></td>
</tr>
</table>
</center>
<center> <html:submit property="submit" value="Submit"/> </center>
</html:form>
</html:html>
The above application runs only with JDK 1.3.1_02 and not with any other version. It creates a dynamic table using Struts.
Can anybody help me with this?
I am also appending the web.xml contents:
<?xml version="1.0" ?>
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
<!-- Action Servlet Configuration -->
<servlet>
<servlet-name>action</servlet-name>
<servlet-class>org.apache.struts.action.ActionServlet</servlet-class>
<init-param>
<param-name>application</param-name>
<param-value>ApplicationResources</param-value>
</init-param>
<init-param>
<param-name>config</param-name>
<param-value>/WEB-INF/struts-config.xml</param-value>
</init-param>
<init-param>
<param-name>debug</param-name>
<param-value>2</param-value>
</init-param>
<init-param>
<param-name>detail</param-name>
<param-value>2</param-value>
</init-param>
<init-param>
<param-name>validate</param-name>
<param-value>true</param-value>
</init-param>
<load-on-startup>2</load-on-startup>
</servlet>
<!-- Action Servlet Mapping -->
<servlet-mapping>
<servlet-name>action</servlet-name>
<url-pattern>*.do</url-pattern>
</servlet-mapping>
<!--Welcome file list starts here -->
<welcome-file-list>
<welcome-file>
/test.jsp
</welcome-file>
</welcome-file-list>
<!-- Struts Tag Library Descriptors -->
<taglib>
<taglib-uri>/WEB-INF/struts-bean.tld</taglib-uri>
<taglib-location>/WEB-INF/struts-bean.tld</taglib-location>
</taglib>
<taglib>
<taglib-uri>/WEB-INF/struts-html.tld</taglib-uri>
<taglib-location>/WEB-INF/struts-html.tld</taglib-location>
</taglib>
<taglib>
<taglib-uri>/WEB-INF/struts-logic.tld</taglib-uri>
<taglib-location>/WEB-INF/struts-logic.tld</taglib-location>
</taglib>
</web-app>
validate-rules.xml
<!DOCTYPE form-validation PUBLIC
"-//Apache Software Foundation//DTD Commons Validator Rules Configuration 1.0//EN"
"http://jakarta.apache.org/commons/dtds/validator_1_0.dtd">
<!--
This file contains the default Struts Validator pluggable validator
definitions. It should be placed somewhere under /WEB-INF and
referenced in the struts-config.xml under the plug-in element
for the ValidatorPlugIn.
<plug-in className="org.apache.struts.validator.ValidatorPlugIn">
<set-property property="pathnames" value="/WEB-INF/validator-rules.xml,
/WEB-INF/validation.xml"/>
</plug-in>
These are the default error messages associated with
each validator defined in this file. They should be
added to your projects ApplicationResources.properties
file or you can associate new ones by modifying the
pluggable validators msg attributes in this file.
# Struts Validator Error Messages
errors.required={0} is required.
errors.minlength={0} can not be less than {1} characters.
errors.maxlength={0} can not be greater than {1} characters.
errors.invalid={0} is invalid.
errors.byte={0} must be a byte.
errors.short={0} must be a short.
errors.integer={0} must be an integer.
errors.long={0} must be a long.
errors.float={0} must be a float.
errors.double={0} must be a double.
errors.date={0} is not a date.
errors.range={0} is not in the range {1} through {2}.
errors.creditcard={0} is an invalid credit card number.
errors.email={0} is an invalid e-mail address.
-->
<form-validation>
<global>
<validator name="required"
classname="org.apache.struts.validator.FieldChecks"
method="validateRequired"
methodParams="java.lang.Object,
org.apache.commons.validator.ValidatorAction,
org.apache.commons.validator.Field,
org.apache.struts.action.ActionErrors,
javax.servlet.http.HttpServletRequest"
msg="errors.required">
<javascript><![CDATA[
function validateRequired(form) {
    var isValid = true;
    var focusField = null;
    var i = 0;
    var fields = new Array();
    oRequired = new required();
    for (x in oRequired) {
        var field = form[oRequired[x][0]];
        if (field.type == 'text' ||
            field.type == 'textarea' ||
            field.type == 'file' ||
            field.type == 'select-one' ||
            field.type == 'radio' ||
            field.type == 'password') {
            var value = '';
            // get field's value
            if (field.type == "select-one") {
                var si = field.selectedIndex;
                if (si >= 0) {
                    value = field.options[si].value;
                }
            } else {
                value = field.value;
            }
            if (trim(value).length == 0) {
                if (i == 0) {
                    focusField = field;
                }
                fields[i++] = oRequired[x][1];
                isValid = false;
            }
        }
    }
    if (fields.length > 0) {
        focusField.focus();
        alert(fields.join('\n'));
    }
    return isValid;
}

// Trim whitespace from left and right sides of s.
function trim(s) {
    return s.replace( /^\s*/, "" ).replace( /\s*$/, "" );
}
]]>
</javascript>
</validator>
<validator name="requiredif"
classname="org.apache.struts.validator.FieldChecks"
method="validateRequiredIf"
methodParams="java.lang.Object,
org.apache.commons.validator.ValidatorAction,
org.apache.commons.validator.Field,
org.apache.struts.action.ActionErrors,
org.apache.commons.validator.Validator,
javax.servlet.http.HttpServletRequest"
msg="errors.required">
</validator>
<validator name="minlength"
classname="org.apache.struts.validator.FieldChecks"
method="validateMinLength"
methodParams="java.lang.Object,
org.apache.commons.validator.ValidatorAction,
org.apache.commons.validator.Field,
org.apache.struts.action.ActionErrors,
javax.servlet.http.HttpServletRequest"
depends=""
msg="errors.minlength">
<javascript><![CDATA[
function validateMinLength(form) {
    var isValid = true;
    var focusField = null;
    var i = 0;
    var fields = new Array();
    oMinLength = new minlength();
    for (x in oMinLength) {
        var field = form[oMinLength[x][0]];
        if (field.type == 'text' ||
            field.type == 'textarea') {
            var iMin = parseInt(oMinLength[x][2]("minlength"));
            if ((trim(field.value).length > 0) && (field.value.length < iMin)) {
                if (i == 0) {
                    focusField = field;
                }
                fields[i++] = oMinLength[x][1];
                isValid = false;
            }
        }
    }
    if (fields.length > 0) {
        focusField.focus();
        alert(fields.join('\n'));
    }
    return isValid;
}]]>
</javascript>
</validator>
<validator name="maxlength"
classname="org.apache.struts.validator.FieldChecks"
method="validateMaxLength"
methodParams="java.lang.Object,
org.apache.commons.validator.ValidatorAction,
org.apache.commons.validator.Field,
org.apache.struts.action.ActionErrors,
javax.servlet.http.HttpServletRequest"
depends=""
msg="errors.maxlength">
<javascript><![CDATA[
function validateMaxLength(form) {
    var isValid = true;
    var focusField = null;
    var i = 0;
    var fields = new Array();
    oMaxLength = new maxlength();
    for (x in oMaxLength) {
        var field = form[oMaxLength[x][0]];
        if (field.type == 'text' ||
            field.type == 'textarea') {
            var iMax = parseInt(oMaxLength[x][2]("maxlength"));
            if (field.value.length > iMax) {
                if (i == 0) {
                    focusField = field;
                }
                fields[i++] = oMaxLength[x][1];
                isValid = false;
            }
        }
    }
    if (fields.length > 0) {
        focusField.focus();
        alert(fields.join('\n'));
    }
    return isValid;
}]]>
</javascript>
</validator>
<validator name="mask"
classname="org.apache.struts.validator.FieldChecks"
method="validateMask"
methodParams="java.lang.Object,
org.apache.commons.validator.ValidatorAction,
org.apache.commons.validator.Field,
org.apache.struts.action.ActionErrors,
javax.servlet.http.HttpServletRequest"
depends=""
msg="errors.invalid">
<javascript><![CDATA[
function validateMask(form) {
    var isValid = true;
    var focusField = null;
    var i = 0;
    var fields = new Array();
    oMasked = new mask();
    for (x in oMasked) {
        var field = form[oMasked[x][0]];
        if ((field.type == 'text' ||
             field.type == 'textarea') &&
            (field.value.length > 0)) {
            if (!matchPattern(field.value, oMasked[x][2]("mask"))) {
                if (i == 0) {
                    focusField = field;
                }
                fields[i++] = oMasked[x][1];
                isValid = false;
            }
        }
    }
    if (fields.length > 0) {
        focusField.focus();
        alert(fields.join('\n'));
    }
    return isValid;
}

function matchPattern(value, mask) {
    return mask.exec(value);
}]]>
</javascript>
</validator>
<validator name="byte"
classname="org.apache.struts.validator.FieldChecks"
method="validateByte"
methodParams="java.lang.Object,
org.apache.commons.validator.ValidatorAction,
org.apache.commons.validator.Field,
org.apache.struts.action.ActionErrors,
javax.servlet.http.HttpServletRequest"
depends=""
msg="errors.byte"
jsFunctionName="ByteValidations">
<javascript><![CDATA[
function validateByte(form) {
    var bValid = true;
    var focusField = null;
    var i = 0;
    var fields = new Array();
    oByte = new ByteValidations();
    for (x in oByte) {
        var field = form[oByte[x][0]];
        if (field.type == 'text' ||
            field.type == 'textarea' ||
            field.type == 'select-one' ||
            field.type == 'radio') {
            var value = '';
            // get field's value
            if (field.type == "select-one") {
                var si = field.selectedIndex;
                if (si >= 0) {
                    value = field.options[si].value;
                }
            } else {
                value = field.value;
            }
            if (value.length > 0) {
                if (!isAllDigits(value)) {
                    bValid = false;
                    if (i == 0) {
                        focusField = field;
                    }
                    fields[i++] = oByte[x][1];
                } else {
                    var iValue = parseInt(value);
                    if (isNaN(iValue) || !(iValue >= -128 && iValue <= 127)) {
                        if (i == 0) {
                            focusField = field;
                        }
                        fields[i++] = oByte[x][1];
                        bValid = false;
                    }
                }
            }
        }
    }
    if (fields.length > 0) {
        focusField.focus();
        alert(fields.join('\n'));
    }
    return bValid;
}]]>
</javascript>
</validator>
<validator name="short"
classname="org.apache.struts.validator.FieldChecks"
method="validateShort"
methodParams="java.lang.Object,
org.apache.commons.validator.ValidatorAction,
org.apache.commons.validator.Field,
org.apache.struts.action.ActionErrors,
javax.servlet.http.HttpServletRequest"
depends=""
msg="errors.short"
jsFunctionName="ShortValidations">
<javascript><![CDATA[
function validateShort(form) {
    var bValid = true;
    var focusField = null;
    var i = 0;
    var fields = new Array();
    oShort = new ShortValidations();
    for (x in oShort) {
        var field = form[oShort[x][0]];
        if (field.type == 'text' ||
            field.type == 'textarea' ||
            field.type == 'select-one' ||
            field.type == 'radio') {
            var value = '';
            // get field's value
            if (field.type == "select-one") {
                var si = field.selectedIndex;
                if (si >= 0) {
                    value = field.options[si].value;
                }
            } else {
                value = field.value;
            }
            if (value.length > 0) {
                if (!isAllDigits(value)) {
                    bValid = false;
                    if (i == 0) {
                        focusField = field;
                    }
                    fields[i++] = oShort[x][1];
                } else {
                    var iValue = parseInt(value);
                    if (isNaN(iValue) || !(iValue >= -32768 && iValue <= 32767)) {
                        if (i == 0) {
                            focusField = field;
                        }
                        fields[i++] = oShort[x][1];
                        bValid = false;
                    }
                }
            }
        }
    }
    if (fields.length > 0) {
        focusField.focus();
        alert(fields.join('\n'));
    }
    return bValid;
}]]>
</javascript>
</validator>
<validator name="integer"
classname="org.apache.struts.validator.FieldChecks"
method="validateInteger"
methodParams="java.lang.Object,
org.apache.commons.validator.ValidatorAction,
org.apache.commons.validator.Field,
org.apache.struts.action.ActionErrors,
javax.servlet.http.HttpServletRequest"
depends=""
msg="errors.integer"
jsFunctionName="IntegerValidations">
<javascript><![CDATA[
function validateInteger(form) {
    var bValid = true;
    var focusField = null;
    var i = 0;
    var fields = new Array();
    oInteger = new IntegerValidations();
    for (x in oInteger) {
        var field = form[oInteger[x][0]];
        if (field.type == 'text' ||
            field.type == 'textarea' ||
            field.type == 'select-one' ||
            field.type == 'radio') {
            var value = '';
            // get field's value
            if (field.type == "select-one") {
                var si = field.selectedIndex;
                if (si >= 0) {
                    value = field.options[si].value;
                }
            } else {
                value = field.value;
            }
            if (value.length > 0) {
                if (!isAllDigits(value)) {
                    bValid = false;
                    if (i == 0) {
                        focusField = field;
                    }
                    fields[i++] = oInteger[x][1];
                } else {
                    var iValue = parseInt(value);
                    if (isNaN(iValue) || !(iValue >= -2147483648 && iValue <= 2147483647)) {
                        if (i == 0) {
                            focusField = field;
                        }
                        fields[i++] = oInteger[x][1];
                        bValid = false;
                    }
                }
            }
        }
    }
    if (fields.length > 0) {
        focusField.focus();
        alert(fields.join('\n'));
    }
    return bValid;
}

function isAllDigits(argvalue) {
    argvalue = argvalue.toString();
    var validChars = "0123456789";
    var startFrom = 0;
    if (argvalue.substring(0, 2) == "0x") {
        validChars = "0123456789abcdefABCDEF";
        startFrom = 2;
    } else if (argvalue.charAt(0) == "0") {
        validChars = "01234567";
        startFrom = 1;
    } else if (argvalue.charAt(0) == "-") {
        startFrom = 1;
    }
    for (var n = startFrom; n < argvalue.length; n++) {
        if (validChars.indexOf(argvalue.substring(n, n+1)) == -1) return false;
    }
    return true;
}]]>
</javascript>
</validator>
<validator name="long"
classname="org.apache.struts.validator.FieldChecks"
method="validateLong"
methodParams="java.lang.Object,
org.apache.commons.validator.ValidatorAction,
org.apache.commons.validator.Field,
org.apache.struts.action.ActionErrors,
javax.servlet.http.HttpServletRequest"
depends=""
msg="errors.long"/>
<validator name="float"
classname="org.apache.struts.validator.FieldChecks"
method="validateFloat"
methodParams="java.lang.Object,
org.apache.commons.validator.ValidatorAction,
org.apache.commons.validator.Field,
org.apache.struts.action.ActionErrors,
javax.servlet.http.HttpServletRequest"
depends=""
msg="errors.float"
jsFunctionName="FloatValidations">
<javascript><![CDATA[
function validateFloat(form) {
    var bValid = true;
    var focusField = null;
    var i = 0;
    var fields = new Array();
    oFloat = new FloatValidations();
    for (x in oFloat) {
        var field = form[oFloat[x][0]];
        if (field.type == 'text' ||
            field.type == 'textarea' ||
            field.type == 'select-one' ||
            field.type == 'radio') {
            var value = '';
            // get field's value
            if (field.type == "select-one") {
                var si = field.selectedIndex;
                if (si >= 0) {
                    value = field.options[si].value;
                }
            } else {
                value = field.value;
            }
            if (value.length > 0) {
                // remove '.' before checking digits
                var tempArray = value.split('.');
                var joinedString = tempArray.join('');
                if (!isAllDigits(joinedString)) {
                    bValid = false;
                    if (i == 0) {
                        focusField = field;
                    }
                    fields[i++] = oFloat[x][1];
                } else {
                    var iValue = parseFloat(value);
                    if (isNaN(iValue)) {
                        if (i == 0) {
                            focusField = field;
                        }
                        fields[i++] = oFloat[x][1];
                        bValid = false;
                    }
                }
            }
        }
    }
    if (fields.length > 0) {
        focusField.focus();
        alert(fields.join('\n'));
    }
    return bValid;
}]]>
</javascript>
</validator>
<validator name="double"
classname="org.apache.struts.validator.FieldChecks"
method="validateDouble"
methodParams="java.lang.Object,
org.apache.commons.validator.ValidatorAction,
org.apache.commons.validator.Field,
org.apache.struts.action.ActionErrors,
javax.servlet.http.HttpServletRequest"
depends=""
msg="errors.double"/>
<validator name="date"
classname="org.apache.struts.validator.FieldChecks"
method="validateDate"
methodParams="java.lang.Object,
org.apache.commons.validator.ValidatorAction,
org.apache.commons.validator.Field,
org.apache.struts.action.ActionErrors,
javax.servlet.http.HttpServletRequest"
depends=""
msg="errors.date"
jsFunctionName="DateValidations">
<javascript><![CDATA[
function validateDate(form) {
var bValid = true;
var focusField = null;
var i = 0;
var fields = new Array();
oDate = new DateValidations();
for (x in oDate) {
var value = form[oDate[x][0]].value;
var datePattern = oDate[x][2]("datePatternStrict");
if ((form[oDate[x][0]].type == 'text' ||
form[oDate[x][0]].type == 'textarea') &&
(value.length > 0) &&
(datePattern.length > 0)) {
var MONTH = "MM";
var DAY = "dd";
var YEAR = "yyyy";
var orderMonth = datePattern.indexOf(MONTH);
var orderDay = datePattern.indexOf(DAY);
var orderYear = datePattern.indexOf(YEAR);
if ((orderDay < orderYear && orderDay > orderMonth)) {
var iDelim1 = orderMonth + MONTH.length;
var iDelim2 = orderDay + DAY.length;
var delim1 = datePattern.substring(iDelim1, iDelim1 + 1);
var delim2 = datePattern.substring(iDelim2, iDelim2 + 1);
if (iDelim1 == orderDay && iDelim2 == orderYear) {
dateRegexp = new RegExp("^(\\d{2})(\\d{2})(\\d{4})$");
} else if (iDelim1 == orderDay) {
dateRegexp = new RegExp("^(\\d{2})(\\d{2})[" + delim2 + "](\\d{4})$");
} else if (iDelim2 == orderYear) {
dateRegexp = new RegExp("^(\\d{2})[" + delim1 + "](\\d{2})(\\d{4})$");
} else {
dateRegexp = new RegExp("^(\\d{2})[" + delim1 + "](\\d{2})[" + delim2 + "](\\d{4})$");
var matched = dateRegexp.exec(value);
if(matched != null) {
if (!isValidDate(matched[2], matched[1], matched[3])) {
if (i == 0) {
focusField = form[oDate[x][0]];
fields[i++] = oDate[x][1];
bValid = false;
} else {
if (i == 0) {
focusField = form[oDate[x][0]];
fields[i++] = oDate[x][1];
bValid = false;
} else if ((orderMonth < orderYear && orderMonth > orderDay)) {
var iDelim1 = orderDay + DAY.length;
var iDelim2 = orderMonth + MONTH.length;
var delim1 = datePattern.substring(iDelim1, iDelim1 + 1);
var delim2 = datePattern.substring(iDelim2, iDelim2 + 1);
if (iDelim1 == orderMonth && iDelim2 == orderYear) {
dateRegexp = new RegExp("^(\\d{2})(\\d{2})(\\d{4})$");
} else if (iDelim1 == orderMonth) {
dateRegexp = new RegExp("^(\\d{2})(\\d{2})[" + delim2 + "](\\d{4})$");
} else if (iDelim2 == orderYear) {
dateRegexp = new RegExp("^(\\d{2})[" + delim1 + "](\\d{2})(\\d{4})$");
} else {
dateRegexp = new RegExp("^(\\d{2})[" + delim1 + "](\\d{2})[" + delim2 + "](\\d{4})$");
var matched = dateRegexp.exec(value);
if(matched != null) {
if (!isValidDate(matched[1], matched[2], matched[3])) {
if (i == 0) {
focusField = form[oDate[x][0]];
fields[i++] = oDate[x][1];
bValid = false;
} else {
if (i == 0) {
focusField = form[oDate[x][0]];
fields[i++] = oDate[x][1];
bValid = false;
} else if ((orderMonth > orderYear && orderMonth < orderDay)) {
var iDelim1 = orderYear + YEAR.length;
var iDelim2 = orderMonth + MONTH.length;
var delim1 = datePattern.substring(iDelim1, iDelim1 + 1);
var delim2 = datePattern.substring(iDelim2, iDelim2 + 1);
if (iDelim1 == orderMonth && iDelim2 == orderDay) {
dateRegexp = new RegExp("^(\\d{4})(\\d{2})(\\d{2})$");
} else if (iDelim1 == orderMonth) {
dateRegexp = new RegExp("^(\\d{4})(\\d{2})[" + delim2 + "](\\d{2})$");
} else if (iDelim2 == orderDay) {
dateRegexp = new RegExp("^(\\d{4})[" + delim1 + "](\\d{2})(\\d{2})$");
} else {
dateRegexp = new Reg
-
Problem with table creation using CTAS parallel hint
Hi,
We have a base table (CARDS_TAB) with 1,083,565,232 rows and created a replica table called T_CARDS_NEW_201111, but the count in the new table is 1,083,566,976: a difference of 1,744 additional rows. I have no idea how the new table can contain more rows than the original table!
Oracle version is 11.2.0.2.0.
Both table counts were taken after table creation. The script that was used to create the replica table is:
CREATE TABLE T_CARDS_NEW_201111
TABLESPACE T_DATA_XLARGE07
PARTITION BY RANGE (CPS01_DATE_GENERATED)
SUBPARTITION BY LIST (CPS01_CURRENT_STATUS)
SUBPARTITION TEMPLATE
(SUBPARTITION T_NULL VALUES (NULL),
SUBPARTITION T_0 VALUES (0),
SUBPARTITION T_1 VALUES (1),
SUBPARTITION T_3 VALUES (3),
SUBPARTITION T_OTHERS VALUES (DEFAULT))
(PARTITION T_200612 VALUES LESS THAN (TO_DATE(' 2007-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
TABLESPACE T_DATA_XLARGE07
( SUBPARTITION T_200612_T_NULL VALUES (NULL) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_200612_T_0 VALUES (0) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_200612_T_1 VALUES (1) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_200612_T_3 VALUES (3) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_200612_T_OTHERS VALUES (DEFAULT) TABLESPACE T_DATA_XLARGE07 ),
PARTITION T_200701 VALUES LESS THAN (TO_DATE(' 2007-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
TABLESPACE T_DATA_XLARGE07
( SUBPARTITION T_200701_T_NULL VALUES (NULL) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_200701_T_0 VALUES (0) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_200701_T_1 VALUES (1) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_200701_T_3 VALUES (3) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_200701_T_OTHERS VALUES (DEFAULT) TABLESPACE T_DATA_XLARGE07 ),
PARTITION T_201211 VALUES LESS THAN (TO_DATE(' 2012-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
TABLESPACE T_DATA_XLARGE07
( SUBPARTITION T_201211_T_NULL VALUES (NULL) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_201211_T_0 VALUES (0) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_201211_T_1 VALUES (1) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_201211_T_3 VALUES (3) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_201211_T_OTHERS VALUES (DEFAULT) TABLESPACE T_DATA_XLARGE07 ),
PARTITION T_201212 VALUES LESS THAN (TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
TABLESPACE T_DATA_XLARGE07
( SUBPARTITION T_201212_T_NULL VALUES (NULL) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_201212_T_0 VALUES (0) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_201212_T_1 VALUES (1) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_201212_T_3 VALUES (3) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_201212_T_OTHERS VALUES (DEFAULT) TABLESPACE T_DATA_XLARGE07 ))
NOCACHE
NOPARALLEL
MONITORING
ENABLE ROW MOVEMENT
AS
SELECT /*+ PARALLEL (T,40) */ SERIAL_NUMBER ,
PIN_NUMBER ,
CARD_TYPE ,
DENOMINATION ,
DATE_GENERATED ,
LOG_PHY_IND ,
CARD_ID ,
OUTLET_CODE ,
MSISDN ,
BATCH_NUMBER ,
DATE_SOLD ,
DIST_CHANNEL ,
DATE_CEASED ,
DATE_PRINTED ,
DATE_RECHARGE ,
LOGICAL_ORDER_NR ,
DATE_AVAILABLE ,
CURRENT_STATUS ,
ACCESS_CODE from CARDS_TAB T
/
Also, the base table CARDS_TAB has a primary key on the SERIAL_NUMBER column. When trying to create a primary key on the new table, it throws an exception:
ALTER TABLE T_CARDS_NEW_201111 ADD
CONSTRAINT T_PK2_1
PRIMARY KEY (SERIAL_NUMBER) USING INDEX
TABLESPACE T_INDEX_XLARGE07
PARALLEL 10 NOLOGGING;
CONSTRAINT T_PK2_1
ERROR at line 2:
ORA-02437: cannot validate (T_PK2_1) - primary key violated
Thanks in advance.
With Regards,
Farooq Abdulla
For parallel processing, the documentation suggests using automatic degree of parallelism (determined by the system at run time) or choosing a power-of-two value.
See Florian's reply in the neighbouring thread, How to Delete Duplicate Rows from a Table, for locating the violating rows (seemingly a consequence of the parallel processing).
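As an illustrative sketch only (the table and DOP here are placeholders, not the poster's schema), keeping the statement-level degree of parallelism consistent with the hinted source scan, and verifying the counts before adding the constraint, might look like:

```sql
-- Sketch: keep the CREATE-side and SELECT-side degree of parallelism consistent
-- (or omit both and let automatic DOP choose) rather than mixing NOPARALLEL DDL
-- with a /*+ PARALLEL */ query.
CREATE TABLE t_cards_copy
PARALLEL 8                        -- power-of-two DOP for the whole statement
AS
SELECT /*+ PARALLEL(t, 8) */ *    -- same DOP on the source scan
  FROM cards_tab t;

-- Verify the copy row-for-row before attempting the primary key:
SELECT (SELECT COUNT(*) FROM cards_tab)    AS source_rows,
       (SELECT COUNT(*) FROM t_cards_copy) AS copy_rows
  FROM dual;
```

If the counts differ, comparing SERIAL_NUMBER values between the two tables (e.g. with MINUS in both directions) will show which rows are duplicated or extra.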
Regards
Etbin -
Is it possible to create table with partition in compress mode
Hi All,
I want to create a partitioned table with the COMPRESS option. When I create the table with partitions, compression isn't enabled, but with normal table creation the compression option is enabled.
My question is:
Can't we create a table with partitions/subpartitions in compress mode? Please help.
Below is the code that i have used for table creation.
CREATE TABLE temp
TRADE_ID NUMBER,
SRC_SYSTEM_ID VARCHAR2(60 BYTE),
SRC_TRADE_ID VARCHAR2(60 BYTE),
SRC_TRADE_VERSION VARCHAR2(60 BYTE),
ORIG_SRC_SYSTEM_ID VARCHAR2(30 BYTE),
TRADE_STATUS VARCHAR2(60 BYTE),
TRADE_TYPE VARCHAR2(60 BYTE),
SECURITY_TYPE VARCHAR2(60 BYTE),
VOLUME NUMBER,
ENTRY_DATE DATE,
REASON VARCHAR2(255 BYTE),
TABLESPACE data
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
NOLOGGING
COMPRESS
NOCACHE
PARALLEL (DEGREE 6 INSTANCES 1)
MONITORING
PARTITION BY RANGE (TRADE_DATE)
SUBPARTITION BY LIST (SRC_SYSTEM_ID)
SUBPARTITION TEMPLATE
(SUBPARTITION SALES VALUES ('sales'),
SUBPARTITION MAG VALUES ('MAG'),
SUBPARTITION SPI VALUES ('SPI', 'SPIM', 'SPIIA'),
SUBPARTITION FIS VALUES ('FIS'),
SUBPARTITION GD VALUES ('GS'),
SUBPARTITION ST VALUES ('ST'),
SUBPARTITION KOR VALUES ('KOR'),
SUBPARTITION BLR VALUES ('BLR'),
SUBPARTITION SUT VALUES ('SUT'),
SUBPARTITION RM VALUES ('RM'),
SUBPARTITION DEFAULT VALUES (default)
PARTITION RMS_TRADE_DLY_MAX VALUES LESS THAN (MAXVALUE)
LOGGING
TABLESPACE data
( SUBPARTITION TS_MAX_SALES VALUES ('SALES') TABLESPACE data,
SUBPARTITION TS_MAX_MAG VALUES ('MAG') TABLESPACE data,
SUBPARTITION TS_MAX_SPI VALUES ('SPI', 'SPIM', 'SPIIA') TABLESPACE data,
SUBPARTITION TS_MAX_FIS VALUES ('FIS') TABLESPACE data,
SUBPARTITION TS_MAX_GS VALUES ('GS') TABLESPACE data,
SUBPARTITION TS_MAX_ST VALUES ('ST') TABLESPACE data,
SUBPARTITION TS_MAX_KOR VALUES ('KOR') TABLESPACE data,
SUBPARTITION TS_MAX_BLR VALUES ('BLR') TABLESPACE data,
SUBPARTITION TS_MAX_SUT VALUES ('SUT') TABLESPACE data,
SUBPARTITION TS_MAX_RM VALUES ('RM') TABLESPACE data,
SUBPARTITION TS_MAX_DEFAULT VALUES (default) TABLESPACE data));
Edited by: user11942774 on 8 Dec, 2011 5:17 AM
user11942774 wrote:
I want to create a partitioned table with the COMPRESS option. When I create the table with partitions, compression isn't enabled, but with normal table creation the compression option is enabled.
First of all, your CREATE TABLE statement is full of syntax errors. Next time, test it before posting - we don't want to spend time fixing things not related to your question.
Now, I bet you check the COMPRESSION value of the partitioned table the same way you do for a non-partitioned table - in USER_TABLES - and therefore get the wrong result. Since compression can be enabled at the individual partition level, you need to check COMPRESSION in USER_TAB_PARTITIONS:
SQL> CREATE TABLE temp
2 (
3 TRADE_ID NUMBER,
4 SRC_SYSTEM_ID VARCHAR2(60 BYTE),
5 SRC_TRADE_ID VARCHAR2(60 BYTE),
6 SRC_TRADE_VERSION VARCHAR2(60 BYTE),
7 ORIG_SRC_SYSTEM_ID VARCHAR2(30 BYTE),
8 TRADE_STATUS VARCHAR2(60 BYTE),
9 TRADE_TYPE VARCHAR2(60 BYTE),
10 SECURITY_TYPE VARCHAR2(60 BYTE),
11 VOLUME NUMBER,
12 ENTRY_DATE DATE,
13 REASON VARCHAR2(255 BYTE),
14 TRADE_DATE DATE
15 )
16 TABLESPACE users
17 PCTUSED 0
18 PCTFREE 10
19 INITRANS 1
20 MAXTRANS 255
21 NOLOGGING
22 COMPRESS
23 NOCACHE
24 PARALLEL (DEGREE 6 INSTANCES 1)
25 MONITORING
26 PARTITION BY RANGE (TRADE_DATE)
27 SUBPARTITION BY LIST (SRC_SYSTEM_ID)
28 SUBPARTITION TEMPLATE
29 (SUBPARTITION SALES VALUES ('sales'),
30 SUBPARTITION MAG VALUES ('MAG'),
31 SUBPARTITION SPI VALUES ('SPI', 'SPIM', 'SPIIA'),
32 SUBPARTITION FIS VALUES ('FIS'),
33 SUBPARTITION GD VALUES ('GS'),
34 SUBPARTITION ST VALUES ('ST'),
35 SUBPARTITION KOR VALUES ('KOR'),
36 SUBPARTITION BLR VALUES ('BLR'),
37 SUBPARTITION SUT VALUES ('SUT'),
38 SUBPARTITION RM VALUES ('RM'),
39 SUBPARTITION DEFAULT_SUB VALUES (default)
40 )
41 (
42 PARTITION RMS_TRADE_DLY_MAX VALUES LESS THAN (MAXVALUE)
43 LOGGING
44 TABLESPACE users
45 ( SUBPARTITION TS_MAX_SALES VALUES ('SALES') TABLESPACE users,
46 SUBPARTITION TS_MAX_MAG VALUES ('MAG') TABLESPACE users,
47 SUBPARTITION TS_MAX_SPI VALUES ('SPI', 'SPIM', 'SPIIA') TABLESPACE users,
48 SUBPARTITION TS_MAX_FIS VALUES ('FIS') TABLESPACE users,
49 SUBPARTITION TS_MAX_GS VALUES ('GS') TABLESPACE users,
50 SUBPARTITION TS_MAX_ST VALUES ('ST') TABLESPACE users,
51 SUBPARTITION TS_MAX_KOR VALUES ('KOR') TABLESPACE users,
52 SUBPARTITION TS_MAX_BLR VALUES ('BLR') TABLESPACE users,
53 SUBPARTITION TS_MAX_SUT VALUES ('SUT') TABLESPACE users,
54 SUBPARTITION TS_MAX_RM VALUES ('RM') TABLESPACE users,
55 SUBPARTITION TS_MAX_DEFAULT VALUES (default) TABLESPACE users));
Table created.
SQL>
SQL>
SQL> SELECT PARTITION_NAME,
2 COMPRESSION
3 FROM USER_TAB_PARTITIONS
4 WHERE TABLE_NAME = 'TEMP'
5 /
PARTITION_NAME COMPRESS
RMS_TRADE_DLY_MAX ENABLED
SQL> SELECT COMPRESSION
2 FROM USER_TABLES
3 WHERE TABLE_NAME = 'TEMP'
4 /
COMPRESS
SQL> SY. -
ORA-00604 ORA-00904 When query partitioned table with partitioned indexes
Got ORA-00604 and ORA-00904 when querying a partitioned table with partitioned indexes in the data warehouse environment.
The query runs fine when querying the partitioned table without partitioned indexes.
Here is the query.
SELECT al2.vdc_name, al7.model_series_name, COUNT (DISTINCT (al1.vin)),
al27.accessory_code
FROM vlc.veh_vdc_accessorization_fact al1,
vlc.vdc_dim al2,
vlc.model_attribute_dim al7,
vlc.ppo_list_dim al18,
vlc.ppo_list_indiv_type_dim al23,
vlc.accy_type_dim al27
WHERE ( al2.vdc_id = al1.vdc_location_id
AND al7.model_attribute_id = al1.model_attribute_id
AND al18.mydppolist_id = al1.ppo_list_id
AND al23.mydppolist_id = al18.mydppolist_id
AND al23.mydaccytyp_id = al27.mydaccytyp_id
AND ( al7.model_series_name IN ('SCION TC', 'SCION XA', 'SCION XB')
AND al2.vdc_name IN
('PORT OF BALTIMORE',
'PORT OF JACKSONVILLE - LEXUS',
'PORT OF LONG BEACH',
'PORT OF NEWARK',
'PORT OF PORTLAND')
AND al27.accessory_code IN ('42', '43', '44', '45')))
GROUP BY al2.vdc_name, al7.model_series_name, al27.accessory_code
I would recommend that you post this at the following OTN forum:
Database - General
General Database Discussions
and perhaps at:
Oracle Warehouse Builder
Warehouse Builder
The Oracle OLAP forum typically does not cover general data warehousing topics. -
Troubles editing tables with partitions
I'm running SQL Developer 1.5.3 against Oracle 10/11 databases and SQL Developer has trouble with my partitioned tables. Both the schema owner and sys users experience the same problems.
The first time I try to edit a table, I get an "Error Loading Objects" dialog with a NullPointException message. If I immediately turn around and try to edit the table again, I get the Edit Table dialog. That's annoying but there's at least a work-around.
Next, if I select the Indexes pane, I can view the first index but selecting another one results in an "Index Error on <table>" error dialog. The message is "There are no table partitions on which to define local index partitions". At this point, selecting any of the other panes (Columns, Primary Key, etc.) results in the same dialog. While the main Partitions tab shows my partitions, I cannot see them in the Edit Table dialog. In fact, the Partition Definitions and Subpartition Templates panes are blank.
Does anyone else see this behavior? Version 1.5.1 behaved the same way so it's not new.
Of course I've figured out how to do everything I need through SQL but it would be handy if I could just use the tool.
Thank you.
Most of my tables are generated from a script, so this morning I decided to just create a very basic partitioned table. It contained a NUMBER primary key and a TIMESTAMP(6) column to use with partitioning. That table worked just fine in SQL Developer.
At that point I tried to figure out what is different about my tables and I finally found the difference... Oracle Spatial. If I add an MDSYS.SDO_GEOMETRY column to my partitioned table, SQL Developer starts having issues.
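A hypothetical minimal test case along these lines (all names invented) may help others confirm the Spatial connection - the dialogs reportedly work until the geometry column is added:

```sql
-- Minimal repro sketch: a range-partitioned table that behaves fine in
-- SQL Developer's Edit Table dialogs until the SDO_GEOMETRY column is added.
CREATE TABLE part_geom_test (
  id      NUMBER PRIMARY KEY,
  created TIMESTAMP(6),
  shape   MDSYS.SDO_GEOMETRY       -- remove this column and the dialogs work
)
PARTITION BY RANGE (created) (
  PARTITION p2008 VALUES LESS THAN (TIMESTAMP '2009-01-01 00:00:00'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
```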
I also have the GeoRaptor plugin installed so I had to wonder if it was interfering with SQL Developer. I couldn't find an option to uninstall an extension so I went into the sqldeveloper/extensions directory and removed GeoRaptorLibs and org.GeoRaptor.jar. GeoRaptor doesn't appear to be installed in SQL Developer anymore but I still see the same behavior.
It appears that there is an issue in SQL Developer with Oracle Spatial and partitioning. Can someone confirm this? -
Insert performance issue with Partitioned Table.....
Hi All,
I have a performance issue with an insert into a partitioned table. Without the table being partitioned
it ran in less time, but after partitioning it took more than double.
1) The table was created initially without any partitioning, and the below insert took only 27 minutes.
Total Rec Inserted :- 2424233
PL/SQL procedure successfully completed.
Elapsed: 00:27:35.20
2) Then I re-created the table with partitioning (range, yearly - below) and the same insert took 59 minutes.
Is there any way I can achieve better performance during inserts into this partitioned table?
[ Similarly, I have another table with 50 million records where the insert took 10 hours without partitioning;
with the table partitioned, it took 18 hours... ]
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 4195045590
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 643K| 34M| | 12917 (3)| 00:02:36 |
|* 1 | HASH JOIN | | 643K| 34M| 2112K| 12917 (3)| 00:02:36 |
| 2 | VIEW | index$_join$_001 | 69534 | 1290K| | 529 (3)| 00:00:07 |
|* 3 | HASH JOIN | | | | | | |
| 4 | INDEX FAST FULL SCAN| PK_ACCOUNT_MASTER_BASE | 69534 | 1290K| | 181 (3)| 00:00
| 5 | INDEX FAST FULL SCAN| ACCOUNT_MASTER_BASE_IDX2 | 69534 | 1290K| | 474 (2)| 00:00
PLAN_TABLE_OUTPUT
| 6 | TABLE ACCESS FULL | TB_SISADMIN_BALANCE | 2424K| 87M| | 6413 (4)| 00:01:17 |
Predicate Information (identified by operation id):
1 - access("A"."VENDOR_ACCT_NBR"=SUBSTR("B"."ACCOUNT_NO",1,8) AND
"A"."VENDOR_CD"="B"."COMPANY_NO")
3 - access(ROWID=ROWID)
Open C1;
Loop
Fetch C1 Bulk Collect Into C_Rectype Limit 10000;
Forall I In 1..C_Rectype.Count
Insert Into test
(col1, col2, col3)
Values
(val1, val2, val3);
V_Rec := V_Rec + Nvl(C_Rectype.Count,0);
Commit;
Exit When C_Rectype.Count = 0;
C_Rectype.delete;
End Loop;
End;
Total Rec Inserted :- 2424233
PL/SQL procedure successfully completed.
Elapsed: 00:51:01.22
Edited by: user520824 on Jul 16, 2010 9:16 AM
I'm concerned about the view in step 2 and the index join in step 3. A composite index with both columns might eliminate the index join and result in fewer read operations.
If you know which partition the data is going into beforehand you can save a little bit of processing by specifying the partition (which may not be a scalable long-term solution) in the insert - I'm not 100% sure you can do this on inserts but I know you can on selects.
The APPEND hint won't help the way you are using it - the VALUES clause in an insert makes it be ignored. Where it is effective and should help you is if you can do the insert in one query - insert into/select from. If you are using the loop to avoid filling up undo/rollback you can use a bulk collect to batch the selects and commit accordingly - but don't commit more often than you have to because more frequent commits slow transactions down.
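As sketches only (table, partition, and column names below are placeholders, not the poster's schema), the two suggestions above might look like:

```sql
-- 1) Partition-extended syntax, when the target partition is known up front:
INSERT INTO test PARTITION (p_2010) (col1, col2, col3)
SELECT col1, col2, col3 FROM staging WHERE yr = 2010;

-- 2) Direct-path load: APPEND is honoured for INSERT ... SELECT
--    (it is ignored for single-row INSERT ... VALUES):
INSERT /*+ APPEND */ INTO test (col1, col2, col3)
SELECT col1, col2, col3 FROM staging;
COMMIT;   -- direct-path changes are not readable in-session until commit
```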
I don't think there is a nologging hint :)
So, try something like
insert /*+ hints */ into ...
Select
A.Ing_Acct_Nbr, currency_Symbol,
Balance_Date, Company_No,
Substr(Account_No,1,8) Account_No,
Substr(Account_No,9,1) Typ_Cd ,
Substr(Account_No,10,1) Chk_Cd,
Td_Balance, Sd_Balance,
Sysdate, 'Sisadmin'
From Ideaal_Cons.Tb_Account_Master_Base A,
Ideaal_Staging.Tb_Sisadmin_Balance B
Where A.Vendor_Acct_Nbr = Substr(B.Account_No,1,8)
And A.Vendor_Cd = b.company_no
;
Edited by: riedelme on Jul 16, 2010 7:42 AM
Problems with partition tables
Hi all,
I've got some problems with partitioned tables. The script at the bottom runs, but when I want to insert some values it returns an error
(ORA-06550: line 1, column 30: PL/SQL: ORA-06552: PL/SQL: Compilation unit analysis terminated
ORA-06553: PLS-320: the declaration of the type of this expression is incomplete or malformed
ORA-06550: line 1, column 7: PL/SQL: SQL Statement ignored)
and I can't understand why!
Is there something incorrect in the script or not?
Please help me
Thanks in advance
Steve
CREATE TABLE TW_E_CUSTOMER_UNIFIED
(
ID_CUSTOMER_UNIFIED VARCHAR2 (27) NOT NULL ,
START_VALIDITY_DATE DATE NOT NULL ,
END_VALIDITY_DATE DATE ,
CUSTOMER_STATUS VARCHAR2 (255)
)
PARTITION BY RANGE (START_VALIDITY_DATE)
SUBPARTITION BY LIST (END_VALIDITY_DATE)
(PARTITION M200909 VALUES LESS THAN (TO_DATE('20091001','YYYYMMDD'))
(SUBPARTITION M200909_N VALUES (NULL), SUBPARTITION M200909_NN VALUES (DEFAULT)),
PARTITION M200910 VALUES LESS THAN (TO_DATE('20091101','YYYYMMDD'))
(SUBPARTITION M200910_N VALUES (NULL), SUBPARTITION M200910_NN VALUES (DEFAULT)),
PARTITION M200911 VALUES LESS THAN (TO_DATE('20091201','YYYYMMDD'))
(SUBPARTITION M200911_N VALUES (NULL), SUBPARTITION M200911_NN VALUES (DEFAULT)),
PARTITION M200912 VALUES LESS THAN (TO_DATE('20100101','YYYYMMDD'))
(SUBPARTITION M200912_N VALUES (NULL), SUBPARTITION M200912_NN VALUES (DEFAULT)),
PARTITION M201001 VALUES LESS THAN (TO_DATE('20100201','YYYYMMDD'))
(SUBPARTITION M201001_N VALUES (NULL), SUBPARTITION M201001_NN VALUES (DEFAULT)),
PARTITION M201002 VALUES LESS THAN (TO_DATE('20100301','YYYYMMDD'))
(SUBPARTITION M201002_N VALUES (NULL), SUBPARTITION M201002_NN VALUES (DEFAULT)),
PARTITION M210001 VALUES LESS THAN (MAXVALUE)
(SUBPARTITION M210001_N VALUES (NULL), SUBPARTITION M210001_NN VALUES (DEFAULT))
);
Hi Hoek,
the DB version is 10.2 (Italian locale, so the 'SET' month abbreviation is correct).
...there's something strange: now I can INSERT rows but I can't update them!
I'm using this command string:
UPDATE TW_E_CUSTOMER_UNIFIED SET END_VALIDITY_DATE = TO_DATE('09-SET-09', 'DD-MON-RR') WHERE
id_customer_unified = '123' and start_validity_date = TO_DATE('09-SET-09', 'DD-MON-RR');
And this is the error:
Error SQL: ORA-14402: updating partition key column would cause a partition change
14402. 00000 - "updating partition key column would cause a partition change"
*Cause: An UPDATE statement attempted to change the value of a partition
key column causing migration of the row to another partition
*Action: Do not attempt to update a partition key column or make sure that
the new partition key is within the range containing the old
partition key.
I think it is impossible to use a PARTITION/SUBPARTITION scheme like that: the update of END_VALIDITY_DATE causes a partition change.
Do you agree, or is it possible to update a field in a way that implies a partition change?
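If letting rows migrate between partitions is acceptable, ORA-14402 is typically addressed by enabling row movement on the table; a sketch against the table from this thread (the literal values are the ones from the failing statement, and whether migration is acceptable for this data model is an assumption):

```sql
-- Allow Oracle to relocate a row into the partition/subpartition that
-- matches its new key value; without this, ORA-14402 is raised.
ALTER TABLE tw_e_customer_unified ENABLE ROW MOVEMENT;

-- The same UPDATE should then succeed, moving the row as a side effect:
UPDATE tw_e_customer_unified
   SET end_validity_date = TO_DATE('09-SET-09', 'DD-MON-RR')
 WHERE id_customer_unified = '123'
   AND start_validity_date = TO_DATE('09-SET-09', 'DD-MON-RR');
```

Note that the physical move means the ROWID changes, which matters if anything stores ROWIDs.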
Regards Steve -
Issue with DWH DB tables creation
Hi,
While generating the Data Warehouse tables (section 4.10.1, How to Create Data Warehouse Tables), I ended up with an error that states "Creating Datawarehouse tables Failure".
But when I checked the log file 'generate_ctl.log', it has the below message:
"Schema will be created from the following containers:
Oracle 11.5.10
Oracle R12
Universal
Conflict(s) between containers:
Table Name : W_BOM_ITEM_FS
Column Name: INTEGRATION_ID.
The column properties that are different: [keyTypeCode]
Success!"
When I checked the DWH database, I could find DWH tables, but I'm not sure whether all the tables were created.
Can anyone tell me whether my DWH tables were all created? How many tables would be created for the above EBS containers?
Also, should I drop either of the EBS containers to create the DWH tables successfully?
The installation guide states that when DWH table creation fails, 'createtables.log' won't be created. But in my case, this log file got created!
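As a quick sanity check (assuming the warehouse tables follow the W_ prefix seen in the log messages, which is an assumption), something along these lines lists what was actually created in the target schema:

```sql
-- Count warehouse tables in the current schema by naming convention:
SELECT COUNT(*) AS dwh_tables
  FROM user_tables
 WHERE table_name LIKE 'W\_%' ESCAPE '\';

-- And check a specific table from the conflict message:
SELECT table_name
  FROM user_tables
 WHERE table_name = 'W_BOM_ITEM_FS';
```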
Edited by: userOO7 on Nov 19, 2008 2:41 PM
I saw the same message. I also noticed I am unable to load any BOM items into that fact table. It looks like the BOM_EXPLODER package call is not keeping any rows in BOM_EXPLOSION_TEMP, so no rows are loaded into that fact table. Someone needs to log an SR for this.
*****START LOAD SESSION*****
Load Start Time: Wed Nov 19 17:13:42 2008
Target tables:
W_BOM_ITEM_FS
READER_2_1_1> BLKR_16019 Read [0] rows, read [0] error rows for source table [BOM_EXPLOSION_TEMP] instance name [mplt_BC_ORA_BOMItemFact.BOM_EXPLOSION_TEMP]
READER_2_1_1> BLKR_16008 Reader run completed.
TRANSF_2_1_1> DBG_21216 Finished transformations for Source Qualifier [mplt_BC_ORA_BOMItemFact.SQ_BOM_EXPLOSION_TEMP]. Total errors [0]
WRITER_2_*_1> WRT_8167 Start loading table [W_BOM_ITEM_FS] at: Wed Nov 19 17:13:42 2008
WRITER_2_*_1> WRT_8168 End loading table [W_BOM_ITEM_FS] at: Wed Nov 19 17:13:42 2008
WRITER_2_*_1> WRT_8035 Load complete time: Wed Nov 19 17:13:42 2008
LOAD SUMMARY
============
WRT_8036 Target: W_BOM_ITEM_FS (Instance Name: [W_BOM_ITEM_FS])
WRT_8044 No data loaded for this target
WRITER_2__1> WRT_8043 ****END LOAD SESSION*****
WRITER_2_*_1> WRT_8006 Writer run completed.
I now see it is covered in the release notes:
http://download.oracle.com/docs/cd/E12127_01/doc/bia.795/e12087/chapter.htm#CHDFJHHB
1.3.31 No Data Is Loaded Into W_BOM_ITEM_F And W_BOM_ITEM_FS
The mapping SDE_ORA_BOMItemFact needs to call a Stored Procedure (SP) in the Oracle EBS instance, which inserts rows into a global temporary table (duration SYS$SESSION, that is, the data will be lost if the session is closed). This Stored Procedure does not have an explicit commit. The Stored Procedure then needs to read the rows in the temporary table into the warehouse.
In order for the mapping to work, Informatica needs to share the same connection for the SP and the SQL qualifier during ETL. This feature was available in the Informatica 7.x release, but it is not available in Informatica release 8.1.1 (SP4). As a result, W_BOM_ITEM_FS and W_BOM_ITEM_F are not loaded properly.
Workaround
For all Oracle EBS customers:
Open package body bompexpl.
Look for text "END exploder_userexit;", scroll a few lines above, and add a "commit;" command before "EXCEPTION".
Save and compile the package. -
Exporting index with partition export using expdp
Hi,
I am using expdp in 11.1.0.7. Is there a way I can export indexes along with a single-partition export of a table? With a full table export, indexes are exported, but I don't see index creation in the following single-partition scenario:
1. Export a partition from the production table.
2. In development environment, drop all indexes on this table.
3. Drop same partition in development before importing it afresh from the export file created at first step.
4. Import the partition exported in the first step (this does not automatically recreate the indexes dropped in step 2).
5. Manually recreate the indexes.
Thanks
When you do a table-mode export, indexes are included unless you specify EXCLUDE=INDEX. Please list your expdp and impdp commands and the log files to show that indexes are not included.
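For reference, a single-partition table-mode export and import might look like the following sketch; the schema, table, partition, and file names are all placeholders:

```
# Export one partition in table mode (indexes included unless excluded):
expdp scott/tiger tables=scott.sales:p_2011_q1 dumpfile=sales_p1.dmp logfile=sales_p1_exp.log

# Import it back; REPLACE drops and recreates the pre-existing table:
impdp scott/tiger tables=scott.sales:p_2011_q1 dumpfile=sales_p1.dmp table_exists_action=replace
```

Note that when the table already exists at import time with TABLE_EXISTS_ACTION=APPEND or TRUNCATE, the existing table definition (including its indexes) is kept, which may explain the observed behaviour in the drop-and-reimport scenario above.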
Thanks
Dean -
I've recently completed a database upgrade from 10.2.0.3 to 11.2.0.1 using the DBUA.
I've since encountered a slowdown when running a script which drops and recreates a series of ~250 tables. The script normally runs in around 19 seconds. After the upgrade, the script requires ~2 minutes to run.
By chance has anyone encountered something similar?
The problem may be related to the behavior of an "after CREATE on schema" trigger which grants select privileges to a role through a dbms_job call; the behavior differs between 10g and the database that was upgraded from 10g to 11g. I am currently researching this angle.
I will be using the following table creation DDL for this abbreviated test case:
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA;
When calling the above DDL, an "after CREATE on schema" trigger is fired which schedules a job to run immediately, granting select privilege to a role on the table which was just created:
create or replace
trigger select_grant
after CREATE on schema
declare
l_str varchar2(255);
l_job number;
begin
if ( ora_dict_obj_type = 'TABLE' ) then
l_str := 'execute immediate "grant select on ' ||
ora_dict_obj_name ||
' to select_role";';
dbms_job.submit( l_job, replace(l_str,'"','''') );
end if;
end;
{code}
Below I've included data on two separate test runs. The first is on the upgraded database and includes optimizer parameters and an abbreviated TKPROF. I've also, included the offending sys generate SQL which is not issued when the same test is run on a 10g environment that has been set up with a similar test case. The 10g test run's TKPROF is also included below.
The version of the database is 11.2.0.1.
These are the parameters relevant to the optimizer for the test run on the upgraded 11g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 03-11-2010 16:33
SYSSTATS_INFO DSTOP 03-11-2010 17:03
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN CPUSPEEDNW 713.978495
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM 1565.746
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED 2310
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Output from TKPROF on the 11g SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 324
{code}
... large section omitted ...
Here is the performance hit portion of the TKPROF on the 11g SID:
{code}
SQL ID: fsbqktj5vw6n9
Plan Hash: 1443566277
select next_run_date, obj#, run_job, sch_job
from
(select decode(bitand(a.flags, 16384), 0, a.next_run_date,
a.last_enabled_time) next_run_date, a.obj# obj#,
decode(bitand(a.flags, 16384), 0, 0, 1) run_job, a.sch_job sch_job from
(select p.obj# obj#, p.flags flags, p.next_run_date next_run_date,
p.job_status job_status, p.class_oid class_oid, p.last_enabled_time
last_enabled_time, p.instance_id instance_id, 1 sch_job from
sys.scheduler$_job p where bitand(p.job_status, 3) = 1 and
((bitand(p.flags, 134217728 + 268435456) = 0) or
(bitand(p.job_status, 1024) <> 0)) and bitand(p.flags, 4096) = 0 and
p.instance_id is NULL and (p.class_oid is null or (p.class_oid is
not null and p.class_oid in (select b.obj# from sys.scheduler$_class b
where b.affinity is null))) UNION ALL select
q.obj#, q.flags, q.next_run_date, q.job_status, q.class_oid,
q.last_enabled_time, q.instance_id, 1 from sys.scheduler$_lightweight_job
q where bitand(q.job_status, 3) = 1 and ((bitand(q.flags, 134217728 +
268435456) = 0) or (bitand(q.job_status, 1024) <> 0)) and
bitand(q.flags, 4096) = 0 and q.instance_id is NULL and (q.class_oid
is null or (q.class_oid is not null and q.class_oid in (select
c.obj# from sys.scheduler$_class c where
c.affinity is null))) UNION ALL select j.job, 0,
from_tz(cast(j.next_date as timestamp), to_char(systimestamp,'TZH:TZM')
), 1, NULL, from_tz(cast(j.next_date as timestamp),
to_char(systimestamp,'TZH:TZM')), NULL, 0 from sys.job$ j where
(j.field1 is null or j.field1 = 0) and j.this_date is null) a order by
1) where rownum = 1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.47 0.47 0 9384 0 1
total 3 0.48 0.48 0 9384 0 1
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 COUNT STOPKEY (cr=9384 pr=0 pw=0 time=0 us)
1 VIEW (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=6615380 card=194570)
1 SORT ORDER BY STOPKEY (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=11479630 card=194570)
194790 VIEW (cr=9384 pr=0 pw=0 time=537269 us cost=2563 size=11479630 card=194570)
194790 UNION-ALL (cr=9384 pr=0 pw=0 time=439235 us)
231 FILTER (cr=68 pr=0 pw=0 time=920 us)
231 TABLE ACCESS FULL SCHEDULER$_JOB (cr=66 pr=0 pw=0 time=690 us cost=19 size=13157 card=223)
1 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=2 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
1 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=1 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
0 FILTER (cr=3 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL SCHEDULER$_LIGHTWEIGHT_JOB (cr=3 pr=0 pw=0 time=0 us cost=2 size=95 card=1)
0 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=0 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
0 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=0 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
194559 TABLE ACCESS FULL JOB$ (cr=9313 pr=0 pw=0 time=167294 us cost=2542 size=2529254 card=194558)
{code}
and the totals at the end of the TKPROF on the 11g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 3 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 70 0.00 0.00 0 0 0 0
Execute 85 0.01 0.01 0 62 208 37
Fetch 49 0.48 0.49 0 9490 0 35
total 204 0.51 0.51 0 9552 208 72
Misses in library cache during parse: 5
Misses in library cache during execute: 3
35 user SQL statements in session.
53 internal SQL statements in session.
88 SQL statements in session.
Trace file: 11gSID_ora_17721.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
35 user SQL statements in trace file.
53 internal SQL statements in trace file.
88 SQL statements in trace file.
51 unique SQL statements in trace file.
1590 lines in trace file.
18 elapsed seconds in trace file.
{code}
The version of the database is 10.2.0.3.0.
These are the parameters relevant to the optimizer for the test run on the 10g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 09-24-2007 11:09
SYSSTATS_INFO DSTOP 09-24-2007 11:09
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 2110.16949
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Now for the TKPROF of a mirrored test environment running on a 10G SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.01 0 2 16 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 113
{code}
... large section omitted ...
Totals for the TKPROF on the 10g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.02 0 0 0 0
Execute 1 0.00 0.00 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.02 0 2 16 0
Misses in library cache during parse: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 65 0.01 0.01 0 1 32 0
Execute 84 0.04 0.09 20 90 272 35
Fetch 88 0.00 0.10 30 281 0 64
total 237 0.07 0.21 50 372 304 99
Misses in library cache during parse: 38
Misses in library cache during execute: 32
10 user SQL statements in session.
76 internal SQL statements in session.
86 SQL statements in session.
Trace file: 10gSID_ora_32003.trc
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
10 user SQL statements in trace file.
76 internal SQL statements in trace file.
86 SQL statements in trace file.
43 unique SQL statements in trace file.
949 lines in trace file.
0 elapsed seconds in trace file.
{code}
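For reference, trailers like the ones above are produced by the tkprof utility; a typical invocation against the listed trace file would be along these lines (the output file name and sort options are just an example):
{code}
tkprof 10gSID_ora_32003.trc 10gSID_ora_32003.txt sys=no sort=prsela,exeela,fchela
{code}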
Edited by: user8598842 on Mar 11, 2010 5:08 PM

So while this certainly isn't the most elegant of solutions, and it most assuredly isn't supported by Oracle...
I've used the DBMS_IJOB.DROP_USER_JOBS('username') procedure to remove the 194558 orphaned job entries from the JOB$ table. Don't ask; I've no clue how they all got there, but I've prepared some evil looks to unleash upon certain developers tomorrow morning.
Not being able to reorganize the JOB$ table to free the now-wasted ~67MB of space, I've opted to create a new index on the JOB$ table to sidestep the full table scan.
{code}
CREATE INDEX SYS.JOB_F1_THIS_NEXT ON SYS.JOB$ (FIELD1, THIS_DATE, NEXT_DATE) TABLESPACE SYSTEM;
{code}
The next option would be to try to find a way to grant the select privilege to the role without using the aforementioned "after CREATE on schema" trigger and dbms_job call. This method was adopted to cover situations in which a developer manually added a table directly to the database rather than using the provided scripts to recreate their test environment.
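For anyone in a similar bind, the cleanup described above amounts to roughly the following (the owner name is illustrative; DBMS_IJOB is an undocumented SYS-owned package, so treat this as unsupported and test it well away from production):
{code}
-- Count the orphaned entries first:
SELECT COUNT(*) FROM sys.job$;
-- Drop all job queue entries owned by a given user:
EXEC DBMS_IJOB.DROP_USER_JOBS('SOME_DEV_USER');
{code}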
I assume that the following quote from the 11gR2 documentation is mistaken, and there is no such beast as "create or replace table" in 11g:
http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_9003.htm#i2061306
"Dropping a table invalidates dependent objects and removes object privileges on the table. If you want to re-create the table, then you must regrant object privileges on the table, re-create the indexes, integrity constraints, and triggers for the table, and respecify its storage parameters. Truncating and replacing have none of these effects. Therefore, removing rows with the TRUNCATE statement or replacing the table with a *CREATE OR REPLACE TABLE* statement can be more efficient than dropping and re-creating a table." -
Table creation - order of events
I am trying to get some help on the order I should be carrying out table creation tasks.
Say I create a simple table:
create table title (
title_id number(2) not null,
title varchar2(10) not null,
effective_from date not null,
effective_to date not null,
constraint pk_title primary key (title_id)
);
I believe I should populate the data, then create my index:
create unique index title_title_id_idx on title (title_id asc)
But I have read that Oracle will automatically create an index for my primary key if I do not do so myself.
At what point does Oracle create the index on my behalf and how do I stop it?
Should I only apply the primary key constraint after the data has been loaded as well?
Even then, if I add the primary key constraint, will Oracle not immediately create an index for me when I am about to create a specific one matching my naming conventions?

Yeah, but just handle it the way you would handle any other constraint violation - with the EXCEPTIONS INTO clause...
SQL> select index_name, uniqueness from user_indexes
2 where table_name = 'APC'
3 /
no rows selected
SQL> insert into apc values (1)
2 /
1 row created.
SQL> insert into apc values (2)
2 /
1 row created.
SQL> alter table apc add constraint apc_pk primary key (col1)
2 using index ( create unique index my_new_index on apc (col1))
3 /
Table altered.
SQL> insert into apc values (2)
2 /
insert into apc values (2)
ERROR at line 1:
ORA-00001: unique constraint (APC.APC_PK) violated
SQL> alter table apc drop constraint apc_pk
2 /
Table altered.
SQL> insert into apc values (2)
2 /
1 row created.
SQL> alter table apc add constraint apc_pk primary key (col1)
2 using index ( create unique index my_new_index on apc (col1))
3 /
alter table apc add constraint apc_pk primary key (col1)
ERROR at line 1:
ORA-02437: cannot validate (APC.APC_PK) - primary key violated
SQL> @%ORACLE_HOME%/rdbms/admin/utlexcpt.sql
Table created.
SQL> alter table apc add constraint apc_pk primary key (col1)
2 using index ( create unique index my_new_index on apc (col1))
3 exceptions into EXCEPTIONS
4 /
alter table apc add constraint apc_pk primary key (col1)
ERROR at line 1:
ORA-02437: cannot validate (APC.APC_PK) - primary key violated
SQL> select * from apc where rowid in ( select row_id from exceptions)
2 /
COL1
2
2
SQL>

All this is in the documentation. Find out more.
Cheers, APC -
How to create monthly table creation?
Hi Mates,
We are unable to create a new table for each month in the analytic database; instead, the data keeps loading into the previous month's table, as shown in the attached screenshot. The schema user has the table creation privilege. We are using WebCenter Interaction 10gR4.
How can we get the monthly table creation working, please?
Thanks,
Katherine

Hi Trevor,
Thanks for your help. We were able to create tables and load data until April, as attached.
However, the analytic user's privileges were modified in April due to a server operation.
Since then, the analytic log has shown a message saying there is no permission to create tables.
The privileges were re-granted after we found this message, and as I suspected, the issue began after the privileges were modified.
Currently, the analytic user is granted all privileges.
Any idea please?
Thanks,
Kathy
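For reference, the minimum the analytic schema user needs in order to create its monthly tables is the CREATE TABLE privilege plus quota on its tablespace; a sketch of the re-grant (user and tablespace names are illustrative) would be:
{code}
GRANT CREATE TABLE TO analytics_user;
ALTER USER analytics_user QUOTA UNLIMITED ON analytics_data;
{code}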