How to decrease the data in physical layer
Hi to all,
The physical layer has some data, but I want to decrease the data in the physical layer without deleting it. How can this be done, and how many ways are there to do it?
Thanks.........
Prasad
He posted the same question, decrease the data in physical layer , but never replied.
Maybe we can get to know what he actually wants in this thread...
Cheers,
C.
Similar Messages
-
Error occurred when view the data from physical layer
Hi ,
I have created a system DSN called "demo1" for an Excel file. After that I imported the Excel file into the Administration Tool.
Then I wanted to view the data from the physical layer, but I am getting the following error.
Please help me resolve this issue.
Thanks in advance.
Hi Ashok,
You can push data from the PSA to the ODS. To do this, go to the PSA in RSA1 > PSA > go to that request > right-click > select "Schedule update immediately"; the data will then be moved from the PSA to the ODS.
or
In the ODS, delete the failed request, go to the Processing tab, select the third option, "PSA and then subsequently to data targets", and schedule the InfoPackage.
bye
sunil -
How to delete the data from partition table
Hi all,
I am very new to partitioning concepts in Oracle.
My question is how to delete the data from a partitioned table.
Will the query below work?
delete from table1 partition (P_2008_1212)
We have defined range partitioning.
Or help me understand how to delete the data from a partitioned table.
Thanks
Sree874823 wrote:
delete from table1 partition (P_2008_1212)
This approach is wrong. As Andre pointed out, this is not how partitioned tables should be used.
Oracle supports different structures for data and indexes. A table can be a hash table or an index-organised table. It can have B+tree indexes. It can have bitmap indexes. It can be partitioned. Etc.
How the table implements its structure is a physical design consideration.
Application code should only deal with the logical data structure. How that data structure is physically implemented has no bearing on the application. Does your application need to know what the indexes are and the names of the indexes, in order to use a table? Obviously not. So why then does your application need to know that the table is partitioned?
When your application code starts referring directly to physical partitions, it needs to know HOW the table is partitioned. It needs to know WHAT partitions to use. It needs to know the names of the partitions. Etc.
And why? All this means is increased complexity in application code as this code now needs to know and understand the physical data structure. This app code is now more complex, has more moving parts, will have more bugs, and will be more complex to maintain.
Oracle can take an application's SQL and determine, based on the predicates of that SQL, which partitions to use and not use when executing it. All done totally transparently. The app does not need to know that the table is even partitioned.
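To make the pruning idea concrete, here is a toy Python sketch of the decision the optimizer makes transparently from the SQL predicate alone. The partition names and date bounds are made up for illustration:

```python
from datetime import date

# Hypothetical range partitions: each one holds rows with dates below
# its upper bound (names and bounds are made up for illustration).
PARTITIONS = [
    ("P_2008_H1", date(2008, 7, 1)),
    ("P_2008_H2", date(2009, 1, 1)),
    ("P_2009_H1", date(2009, 7, 1)),
]

def partitions_to_scan(pred_lo, pred_hi):
    """Pick the partitions overlapping the half-open range [pred_lo, pred_hi).

    This mimics the pruning the optimizer performs from the SQL
    predicate alone: the application never names a partition.
    """
    chosen = []
    lower = date.min
    for name, upper in PARTITIONS:
        if pred_lo < upper and pred_hi > lower:
            chosen.append(name)
        lower = upper
    return chosen
```

Note that the caller only supplies the date range from the WHERE clause; the partition names never appear in application code.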
This is a crucial concept to understand and get right. -
How to Split the Date Column in OBIEE
Hi,
We have a date column named To_Date, in the format MM/DD/YY HH:MM:SS.
How do I split the date into YEAR, MONTH, and DAY as new columns?
Kindly help with this.
Regards.,
CHR
Edited by: 867932 on Nov 23, 2011 10:18 PM
Hi User,
All three functions can be written in the rpd too. In the BMM layer, duplicate the date column -> go to the Column Mapping tab -> Expression Builder -> Select Functions -> Calendar Date/Time Functions -> select the DayOfMonth function. Your logical column's formula will look like:
DayofMonth(YourDateColumn)
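Outside OBIEE, the same three-way split can be sketched in a few lines of Python, which is handy for sanity-checking the MM/DD/YY HH:MM:SS format. The function name and dictionary keys are just illustrative:

```python
from datetime import datetime

def split_date(value, fmt="%m/%d/%y %H:%M:%S"):
    """Split a 'MM/DD/YY HH:MM:SS' date string into year, month, and
    day values, mirroring the three new columns built in the rpd."""
    d = datetime.strptime(value, fmt)
    return {"YEAR": d.year, "MONTH": d.month, "DAY": d.day}
```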
Rgds,
Dpka -
Cannot view data on physical layer on Oracle BI administration tool
Hi everyone
I have a schema1 and table1, table2, table3.
I have connected in ODBC using the schema username and password, and it was successful.
I can view the data using Toad and Oracle SQL Developer with the schema username and password.
I also imported the tables into the Oracle BI Administration Tool with my schema username and password.
But my problem is that I cannot view the data in the tables in the physical layer using my schema username and password.
Please help,
Thanks in advance
Edited by: hanson on Sep 16, 2009 7:15 AM
Hi,
I'm confused about the last bit. You did a database import to get the tables into the RPD, which worked, but now they are not showing in the physical layer of the rpd? They should be there under a database with the name of the ODBC/OCI connection that you used when you imported them.
If they are not there then this would make me think that the import of the tables has not worked, try importing just one and see what happens.
Regards,
Matt -
How to decrease the storage in macbook air
how to decrease the storage in macbook air ?
Do you mean decrease all the files filling your Air?
See here for an answer about the "Other" that is taking up space:
http://pondini.org/TM/30.html
and here:
http://pondini.org/OSX/DiskSpace.html
See Kappy's excellent note on the rest of the "Other" files taking up your space:
What is "Other" and What Can I Do About It?
In the case of a MacBook Air or MacBook Pro Retina with 'limited' storage on the SSD, this distinction becomes more important. In a world of ever-increasing file sizes, you should keep vital large media files, pics, video, PDF collections, and music off your SSD and archived on external storage, for the sake of the free space your system needs to operate, store future applications, and provide general workspace. You should never be put in the position of considering "deleting things" on your MacBook SSD in order to 'make space'.
Professionals who create and import very large amounts of data see almost no change in the available space on their computer's internal HD, because they are constantly archiving data to arrays of external or networked HDs.
For the consumer, this means you keep folders for large imported or created data and you ritually offload and archive this data for safekeeping: not only to safeguard the data in case your MacBook has an HD crash or gets stolen, but, importantly, to keep the 'breathing room' open for your computer to operate, expand, create files, add applications, let your apps create temp files, and function in general. -
How to decrease the OLAP/DB and Frontend time of query
Hi,
While I am executing the query, it takes time to display the query output. The reason is the high OLAP/DB and front-end time.
Can anyone help me decrease the OLAP/DB and front-end time?
Regards,
Probably the existing aggregates are not useful. You can edit them based on the factors below.
You can also check ST03N -> BI Workload -> select Weekly/Monthly -> choose the All Data tab -> double-click on the InfoCube -> choose the query -> check whether the DB time % is > 30% and the aggregation ratio is > 10%; if so, create aggregates.
Now go to RSRT, choose the query -> execute in debug mode -> check 'Display Aggregates Found' and choose 'Display Statistics' -> now you can check records selected vs. records transferred in %; it should be > 10%. That means the query is fetching records from aggregates, which will also improve query performance.
Also check RSRT -> Execute in Debug and choose 'Display aggregates found'. If it finds aggregates, then the query is using/fetching data from the aggregates.
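The two thresholds quoted above can be read as a simple rule of thumb; here is a minimal sketch, assuming the percentages are taken straight from the ST03N statistics:

```python
def should_create_aggregate(db_time_pct, aggregation_ratio_pct):
    """Apply the rule of thumb quoted above: consider building an
    aggregate when DB time is over 30% of query runtime AND the
    records-selected vs. records-transferred ratio is over 10%."""
    return db_time_pct > 30 and aggregation_ratio_pct > 10
```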
Edited by: Srinivas on Sep 14, 2010 3:40 PM -
How to check the data of an archived table.
I have archived a table created by me. I have executed the write program for the archiving object in SARA. Now how can I check the data of my archived table?
Hello Vinod,
One thing to check in the customizing settings is your "Place File in Storage System" option. If you have selected the option to Store before deleting, the archive file will not be available for selection within the delete job until the store job has completed successfully.
As for where your archive file will be stored - there are a number of things to check. The archive write job will place the archive file in whatever filesystem you have set up within the /nFILE transaction. There is a logical file path (for example ARCHIVE_GLOBAL_PATH) where you "assign" the physical path (for example UNIX: /sapmnt/<SYSID>/archivefiles). The logical path is associated with a logical file name (for example ARCHIVE_DATA_FILE_WITH_ARCHIVE_LINK). This is the file name that is used within the customizing settings of the archive object.
Then, the file will be stored using the content repository you defined within the customizing settings as well. Depending on what you are using to store your files (IXOS, IBM CommonStore, SAP Content Server, etc.), that is where the file will be stored.
Hope this helps.
Regards,
Karin Tillotson -
How to sync the data between the two iSCSI target server
Hi experts:
I have double HP dl380g8 server, i plan to install the server 2012r2 iSCSI target as storage, i know the iSCSI storage can setup as high ability too, but after some research i doesn't find out how to sync the data between the two iSCSI target server, can
any body help me?
Thanks
There are basically three ways to go:
1) Get compatible software. The Microsoft iSCSI target cannot do what you want out of the box, but the good news is that third-party software can (there are even free versions with a set of limitations). See:
StarWind Virtual SAN [VSAN]
http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
DataCore SANsymphony-V
http://datacore.com/products/SANsymphony-V.aspx
SteelEye DataKeeper
http://us.sios.com/what-we-do/windows/
All of them do basically the same thing: mirror a set of LUs between Windows hosts to emulate a high-performance, fault-tolerant virtual SAN. All of them do this in active-active mode (all nodes handle I/O), and at least StarWind and DataCore have sophisticated distributed cache implementations (RAM and flash).
2) Get incompatible software (the MSFT iSCSI target) and run it in a generic Windows cluster. That would require you to have CSV, so physical shared storage (FC or SAS; iSCSI obviously makes zero sense here, as you could feed THAT iSCSI target directly to your block storage consumers). This is doable and is supported by MSFT, but it has numerous drawbacks. First of all, it's SLOW, because a) the MSFT target does no caching and does not even use the file system cache (at all; the VHDX files it uses as containers are opened and I/O-ed in "pass-through" mode), b) it's only active-passive (one node handles I/O at a time, with the other one doing nothing in standby mode), and c) the I/O route is long (iSCSI initiator -> MSFT iSCSI target -> clustered block back end). For reference see:
Configuring iSCSI Storage for High Availability
http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
MSFT iSCSI Target Cluster
http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
3) Re-think what you do. Instead of the iSCSI target from MSFT, you can use newer technologies like SoFS (obviously faster, but it requires a set of dedicated servers), or just a shared VHDX if you have a fault-tolerant SAS or FC back end and want to spawn a guest VM cluster. See:
Scale-Out File Servers
http://technet.microsoft.com/en-us/library/hh831349.aspx
Deploy a Guest Cluster Using a Shared Virtual Hard Disk
http://technet.microsoft.com/en-us/library/dn265980.aspx
With the Windows Server 2012 R2 release, virtual FC and the clustered MSFT target are both effectively deprecated features, as shared VHDX is both faster and easier to set up and use if you have an FC or SAS block back end and need a guest VM cluster.
Hope this helped a bit :)
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts. -
Server Admin - How current is the data?
Good afternoon.
When I view the DHCP data in Server Admin, how current is the data? When I shut down workstations and log on others, the data displayed does not change, even after multiple refreshes. A quit & relaunch doesn't update the data either.
Can I rely on Server Admin to provide a current, accurate view?
Brian Bowell
Berkley Normal Middle School
Hamilton, New Zealand.
I'm not sure of the frequency at which Server Admin re-polls the data, but nothing is cached, so at the very least each time you connect to a server you're seeing data that's valid up until then.
However, in the case of DHCP you need to consider lease duration.
If your lease is set to something like 3 days then any given client will remain in the DHCP leases list for that long.
In other words, assuming you have one DHCP client on your network and that machine boots on Monday Afternoon, then that machine will appear in the DHCP leases list until at least some time on Thursday afternoon. The number of leases will not decrease when the machine shuts down on Tuesday, nor increase if that machine comes back online before the original lease expiration (it will just be re-issued the same DHCP lease/IP address/etc.). Therefore your lease counter will remain at 1 even when the machine is offline.
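The lease bookkeeping described above can be sketched as a small hypothetical helper, assuming a fixed 3-day lease; it illustrates why the count ignores whether the client is currently online:

```python
from datetime import datetime, timedelta

LEASE_DURATION = timedelta(days=3)  # example lease length from the post

def active_leases(lease_starts, now):
    """Count leases that have not yet expired at `now`.

    A lease stays in the list for its full duration whether or not the
    client machine is still online, which is why the Server Admin count
    does not drop when a workstation shuts down.
    """
    return sum(1 for start in lease_starts if now < start + LEASE_DURATION)
```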
Is it possible that accounts for what you're experiencing? The number of leases does not reflect the number of active DHCP clients on your network, just the number that have been active within the last (lease duration) hours. -
How to import the data again to owb
Hi Experts,
I took a backup of the OWB data by going to Design -> Export -> Metadata Wizard -> Export; all the data with all dependencies has been exported.
My question is how to restore the data back into the OWB tool.
Please suggest a good solution for this ASAP.
Thanks,
Sen
If you want to import the exported OWB project, just use Design -> Import, choose the file location, and while importing choose the options "replace metadata" and "match by physical names".
Suresh -
How to pass the data from a input table to RFC data service?
Hi,
I am doing a prototype with VC, and I'm wondering how VC passes the data from a table view to a backend data service. For example, I have one RFC in the backend system with a table-type importing parameter, and I want to pass all the data from an input table view to the RFC. I guess it's possible, but I don't know how to do it.
I tried to create some events between the input table and the data service, but it seems there is no system event that can export the whole table to the backend data service.
Thanks for your answer.
Thanks for your answer. I tried solution 2: I created a "Submit" button and set the mapping scope to "All data rows", but it only works when I select at least one row; otherwise the data is not passed.
Another question: I have several imported table parameters, and for each table I have one "submit" event. I want these tables to be submitted at the same time, but if I click the submit button in one table's toolbar, only the data of the table whose submit button was clicked is passed; for the other tables, the data is not passed. How can I achieve this?
Thanks. -
How to select the data from a Maintainance View into an internal table
Hi All,
Can anybody tell me how to select the data from a maintenance view into an internal table?
Thanks,
srinivas.
Hi,
You cannot retrieve data from a maintenance view.
For details, check this link:
http://help.sap.com/saphelp_nw2004s/helpdata/en/cf/21ed2d446011d189700000e8322d00/content.htm
Regards,
Anirban -
How to download the data which is in the table?
How do I download the data which is in the table?
I want to download every field of the data in the table, and once the download is finished, I have to set a flag 'download is finished' in one field of the table.
Can anyone help me with this?
Phani.
Edited by: phani kumarDurusoju on Jan 9, 2008 6:36 AM
One way is to download the data directly from the database table using the path SE11 -> give the table name -> Execute -> System -> List -> Save -> Local File.
There you can download the data.
The other way is to use code.
The following code will be helpful to you:
DATA: ITAB TYPE TRUXS_T_TEXT_DATA,
      FILENAME TYPE STRING,
      C_ASC TYPE CHAR10 VALUE 'ASC',
      C_X TYPE C VALUE 'X'.
DATA: L_STATUS TYPE C,
      L_MESSAGE TYPE PMST_RAW_MESSAGE,
      L_SUBJECT TYPE SO_OBJ_DES.
DATA: L_FILELENGTH TYPE I.
PERFORM download_to_pc
TABLES
itab
USING
filename
c_asc
c_x
CHANGING
l_status
l_message
l_filelength.
FORM DOWNLOAD_TO_PC TABLES DOWNLOADTAB
USING FILENAME
FILETYPE TYPE CHAR10
DELIMITED
CHANGING STATUS
MESSAGE TYPE PMST_RAW_MESSAGE
FILELENGTH TYPE I.
DATA: L_FILE TYPE STRING,
L_SEP.
L_FILE = FILENAME.
IF NOT DELIMITED IS INITIAL.
L_SEP = 'X'.
ENDIF.
STATUS = 'S'.
CALL FUNCTION 'GUI_DOWNLOAD'
EXPORTING
FILENAME = L_FILE
FILETYPE = FILETYPE
WRITE_FIELD_SEPARATOR = L_SEP
IMPORTING
FILELENGTH = FILELENGTH
TABLES
DATA_TAB = DOWNLOADTAB
EXCEPTIONS
FILE_WRITE_ERROR = 1
NO_BATCH = 2
GUI_REFUSE_FILETRANSFER = 3
INVALID_TYPE = 4
NO_AUTHORITY = 5
UNKNOWN_ERROR = 6
HEADER_NOT_ALLOWED = 7
SEPARATOR_NOT_ALLOWED = 8
FILESIZE_NOT_ALLOWED = 9
HEADER_TOO_LONG = 10
DP_ERROR_CREATE = 11
DP_ERROR_SEND = 12
DP_ERROR_WRITE = 13
UNKNOWN_DP_ERROR = 14
ACCESS_DENIED = 15
DP_OUT_OF_MEMORY = 16
DISK_FULL = 17
DP_TIMEOUT = 18
FILE_NOT_FOUND = 19
DATAPROVIDER_EXCEPTION = 20
CONTROL_FLUSH_ERROR = 21
OTHERS = 22.
IF SY-SUBRC <> 0.
STATUS = 'E'.
CASE SY-SUBRC.
WHEN 1.
MESSAGE = 'gui_download::file write error'.
WHEN 2.
MESSAGE = 'gui_download::no batch'.
WHEN 3.
MESSAGE = 'gui_download::gui refuse file transfer'.
WHEN 4.
MESSAGE = 'gui_download::invalid type'.
WHEN 5.
MESSAGE = 'gui_download::no authority'.
WHEN 6.
MESSAGE = 'gui_download::unknown error'.
WHEN 7.
MESSAGE = 'gui_download::header not allowed'.
WHEN 8.
MESSAGE = 'gui_download::separator not allowed'.
WHEN 9.
MESSAGE = 'gui_download::filesize not allowed'.
WHEN 10.
MESSAGE = 'gui_download::header too long'.
WHEN 11.
MESSAGE = 'gui_download::dp error create'.
WHEN 12.
MESSAGE = 'gui_download::dp error send'.
WHEN 13.
MESSAGE = 'gui_download::dp error write'.
WHEN 14.
MESSAGE = 'gui_download::unknown dp error'.
WHEN 15.
MESSAGE = 'gui_download::access denied'.
WHEN 16.
MESSAGE = 'gui_download::dp out of memory'.
WHEN 17.
MESSAGE = 'gui_download::disk full'.
WHEN 18.
MESSAGE = 'gui_download::dp timeout'.
WHEN 19.
MESSAGE = 'gui_download::file not found'.
WHEN 20.
MESSAGE = 'gui_download::dataprovider exception'.
WHEN 21.
MESSAGE = 'gui_download::control flush error'.
WHEN 22.
MESSAGE = 'gui_download::Error'.
ENDCASE.
ENDIF.
ENDFORM. "download_to_pc
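For comparison, the question's two-step flow (download every row, then set a 'download is finished' flag) can be sketched outside ABAP as well. The row layout and the DOWNLOADED field name are assumptions for illustration:

```python
import csv

def download_and_flag(rows, path):
    """Write every row of the table to a local file, then set a
    'download is finished' flag on each row: the two-step flow the
    question describes. The DOWNLOADED field name is an assumption."""
    if not rows:
        return rows
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
    # Flag only after the file was written successfully.
    for row in rows:
        row["DOWNLOADED"] = "X"  # 'X' = true, ABAP-style
    return rows
```

In the ABAP version the same ordering matters: update the flag field only after GUI_DOWNLOAD returns SY-SUBRC = 0.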
Please reward points at the end if this helps.
Thanks ,
Rahul -
How to add the date field in the dso and info cube
Hi all.
I am new to BI 7. In the earlier version we had a button to add the date field, but in BI 7 there is no such option, so can anybody tell me how to add the date field to the data targets?
Thanks & Regards,
KK
My problem is solved.
KK