Monday, March 24, 2014

Install Oracle REST Data Services for Oracle Apex in Apache Tomcat 7

1) Download Oracle REST data services from http://www.oracle.com/technetwork/developer-tools/rest-data-services/downloads/index.html

2) Unzip the contents of the ZIP file
3) Rename the ords.war to apex.war if you want the URL in the format of http://hostname/apex
mv ords.war apex.war

4) Before you configure the WAR file, decide where the configuration files should be kept and create that directory. Suppose it is /home/oracle/apex_config

5) Run the following:
java -jar apex.war configdir "/home/oracle/apex_config"

6) Next, run the setup; it will write the config files to the directory specified in step 5 above:
java -jar apex.war setup

7) Copy this apex.war file to the <<tomcat home>>/webapps directory

8) Copy the Apex images directory to <<tomcat home>>/webapps and rename it to i, so that the path to the images directory is
<<tomcat home>>/webapps/i

9) Start up Tomcat
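The file-handling steps above can be rehearsed as a shell session. This is a sandbox sketch: it uses a scratch directory and an empty stand-in for ords.war, the Tomcat path is an assumption, and the java steps (which need a JDK and the real WAR) are shown commented out.

```shell
# Sandbox walk-through of steps 3-8 using stand-in files; substitute your
# real ords.war and Tomcat home when doing this for real.
WORK=$(mktemp -d) && cd "$WORK"
touch ords.war                        # stand-in for the downloaded ords.war
mv ords.war apex.war                  # step 3: rename so the URL becomes /apex
mkdir -p "$WORK/apex_config"          # step 4: directory for the config files
# java -jar apex.war configdir "$WORK/apex_config"   # step 5
# java -jar apex.war setup                           # step 6
mkdir -p "$WORK/tomcat/webapps/i"     # step 8: images directory renamed to i
cp apex.war "$WORK/tomcat/webapps/"   # step 7: deploy the WAR
```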

Wednesday, December 4, 2013

Resolving weblogic.store.PersistentStoreException: "_WLS_AdminServer" cannot open file _WLS_ADMINSERVER000000.DAT


In WebLogic 10.3.5, if you get the following error during startup:



> <Store> 045> <The persistent store "_WLS_AdminServer" could not be deployed: weblogic.store.PersistentStoreException: [Store:280105]The persistent file store "_WLS_AdminServer" cannot open file _WLS_ADMINSERVER000000.DAT.

You can try to fix it by navigating to:

/u01/weblogic/Oracle/Middleware/user_projects/domains/<<Domain_name>>/servers/AdminServer/data/store/default/

and deleting the _WLS_ADMINSERVER000000.DAT file.

This action is detailed in Oracle Support Doc ID 957377.1.

After deleting the file, try restarting the server.
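As a hedged sketch of the fix, it is safer to move the stale store file aside than to delete it outright, so you keep a copy. The demo below rehearses the move on a scratch copy of the directory layout; point STORE_DIR at your real domain path when applying it for real.

```shell
# Rehearsal on a scratch copy of the store layout; in a real domain STORE_DIR
# would be <domain>/servers/AdminServer/data/store/default. Moving the file
# aside keeps a backup; WebLogic recreates the store file on the next boot.
DOMAIN=$(mktemp -d)
STORE_DIR="$DOMAIN/servers/AdminServer/data/store/default"
mkdir -p "$STORE_DIR"
touch "$STORE_DIR/_WLS_ADMINSERVER000000.DAT"     # stand-in for the stale file
mv "$STORE_DIR/_WLS_ADMINSERVER000000.DAT" \
   "$STORE_DIR/_WLS_ADMINSERVER000000.DAT.bak"
```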


 

Wednesday, November 20, 2013

Oracle Database: unexpire schema or fix ORA-28001 without changing the password

In Oracle Database, to unexpire a schema you will need to issue the following command as SYS or SYSTEM:

ALTER USER <USERNAME> IDENTIFIED BY <PASSWORD>

For this to work you will need to know the password of the user you are trying to "unexpire". There are situations, however, where we do not know that password.


A workaround is to retrieve the hashed password of the schema and issue the ALTER USER statement in a slightly different manner:

Using the SCOTT/tiger example below: 

STEP 1: Retrieve the hashed password of the expired schema

Connect as SYSDBA and run the query: 

SQL> select password from sys.user$ where name = 'SCOTT';

PASSWORD
------------------------------
F894844C34402B67


STEP 2: Run the ALTER USER command as below:

SQL> ALTER USER SCOTT IDENTIFIED BY VALUES 'F894844C34402B67';

If the account is locked you may need to run the following as well:

ALTER USER SCOTT ACCOUNT UNLOCK;
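The two steps can be wired together from the shell. This sketch only assembles the statements, hard-coding the SCOTT hash from the example above (in a real run the hash would come from the step 1 query against sys.user$); the sqlplus invocation, shown commented out, is an assumption about your environment.

```shell
# Build the unexpire statements from a schema name and its stored hash.
SCHEMA=SCOTT
HASH=F894844C34402B67                 # from: select password from sys.user$ ...
UNEXPIRE="ALTER USER $SCHEMA IDENTIFIED BY VALUES '$HASH';"
UNLOCK="ALTER USER $SCHEMA ACCOUNT UNLOCK;"
echo "$UNEXPIRE"
echo "$UNLOCK"
# printf '%s\n%s\n' "$UNEXPIRE" "$UNLOCK" | sqlplus -s / as sysdba  # as SYSDBA
```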

Saturday, August 10, 2013

Oracle Apex Tabular Form: Delete unsaved rows without submitting

A common shortfall in Tabular Form functionality in APEX (amongst many) is that when you add a couple of rows and wish to delete one before saving the form, the delete submits the page and you end up losing all the rows you entered.

This piece of code will help you avoid the problem. Put it in the page header, and in the DELETE button's attributes set the type to URL and the target to javascript:delete_rows();.

The way it works: if a checked row has not been saved yet, it is removed without submitting the page; otherwise the page is submitted and the row is deleted via the normal mechanism.

<script language="JavaScript" type="text/javascript">
    function delete_rows()
    {
        var need_submit = 0;

        $("[name='f01']").each(function(){
            if(this.checked)
            {
                if(this.value == 0) // new row, not yet saved: checkbox value is 0
                {
                    $(this).closest("tr").remove();
                }
                else // row was saved before, so it must be removed via page submission
                {
                    need_submit = need_submit + 1;
                }
            }
        });

        if(need_submit > 0) // any saved rows checked?
            apex.submit('MULTI_ROW_DELETE');

        addTotal(); // page-specific helper; omit if your page does not define it
    }
</script>

Saturday, November 10, 2012

Oracle Apex - Dynamically Color Text field

It is often useful to highlight certain fields when certain (usually incorrect) values are entered, as a form of on-the-fly validation.

Suppose you had a fairly large form with many fields, many of which only accept numbers. It would be much more user-friendly to alert end users to incorrect values as they enter them, rather than let them submit the form and sift through a big list of invalid entries.



1) Put the following code in the page header.

<script language="JavaScript" type="text/javascript">
    function setCol(pThis)
    {
        var vv  = $v(pThis);    // current value of the item
        var cls = '#' + pThis;  // jQuery selector for the item
        if(isNaN(vv))
        {
            $(cls).css("background-color","red");
        }
        else
        {
            $(cls).css("background-color","white");
        }
    }
</script>

2) Create Dynamic Action with following configuration:
  1. Type Advanced (for pre-4.2)
  2. Event: change
  3. Selection Type: item
  4. Item(s): P1_ENAME, P1_MOB (comma-separate the list of items)
  5. True Action: 
    1. Action: Execute Javascript Code
    2. Code: setCol(this.triggeringElement.id);
  6. Affected Items:
    1. Pick the items in #4 above.

What this does is highlight the text field when a non-numeric value is entered (isNaN returns true).

Here is how it should look if implemented properly. You can take this further and do fancier things like validating against a regex.

http://apex.oracle.com/pls/apex/f?p=21796:1

Saturday, October 27, 2012

Oracle - Move schema from one tablespace to another


Starting from 11g, Oracle offers a much simpler way to move a schema between tablespaces with the Data Pump utility. Previously, you would have to export the schema, drop the user, re-import the schema, and then rebuild all the indexes.

With Data Pump the process of switching tablespace is much simpler.

STEP 1: Export the schema using Data Pump
expdp system/system_password SCHEMAS=MY_SCHEMA DIRECTORY=DATA_PUMP_DIR DUMPFILE=MY_SCHEMA.dmp LOGFILE=expdp.log

Review the log to ensure the export is done properly.

STEP 2: Drop the user from database
 DROP USER MY_SCHEMA CASCADE;

STEP 3: Import the schema with REMAP_TABLESPACE
impdp system/system_password SCHEMAS=MY_SCHEMA REMAP_TABLESPACE=SYSTEM:MY_SCHEMA_TBSPACE DIRECTORY=DATA_PUMP_DIR DUMPFILE=MY_SCHEMA.dmp LOGFILE=impdp.log


STEP 4: Verify Tablespace Change & Validity of Objects

Check the default tablespace by running:
select username, default_tablespace from dba_users;

Also check that all objects are valid; recompile any that are not:
SQL> select object_name, object_type, status from dba_objects where owner = 'MY_SCHEMA' and status = 'INVALID';

Check for unusable indexes:
select index_name, status from all_indexes where owner = 'MY_SCHEMA' and status = 'UNUSABLE';
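The four steps above can be assembled into a small wrapper script. Everything here (schema, tablespace, credentials) is a placeholder, and the commands are only echoed so the full run can be reviewed before anything destructive happens:

```shell
# Assemble the Data Pump commands for review; nothing is executed against
# the database. Replace the placeholders before running for real.
SCHEMA=MY_SCHEMA
NEW_TBS=MY_SCHEMA_TBSPACE
CONN='system/system_password'
EXP_CMD="expdp $CONN SCHEMAS=$SCHEMA DIRECTORY=DATA_PUMP_DIR DUMPFILE=$SCHEMA.dmp LOGFILE=expdp.log"
IMP_CMD="impdp $CONN SCHEMAS=$SCHEMA REMAP_TABLESPACE=SYSTEM:$NEW_TBS DIRECTORY=DATA_PUMP_DIR DUMPFILE=$SCHEMA.dmp LOGFILE=impdp.log"
echo "$EXP_CMD"
echo "DROP USER $SCHEMA CASCADE;   -- run in SQL*Plus once expdp.log is clean"
echo "$IMP_CMD"
```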

For more information, check out the white paper on Data Pump:

Quick Start Guide:
http://www.oracle.com/technetwork/issue-archive/2009/09-jul/datapump11g2009-quickstart-128718.pdf

Documentation:
http://docs.oracle.com/cd/B12037_01/server.101/b10825/dp_import.htm

Sunday, September 23, 2012

Creating database backups using exp and crontab

The following is a neat little script that will allow you to create logical backups of your schema (and remove old ones) on the same file system as the database.


ORACLE_BASE=/u01/app/oracle
export ORACLE_BASE
ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
export ORACLE_HOME
ORACLE_SID=XE
export ORACLE_SID
PATH=$ORACLE_HOME/bin:$PATH
export PATH

BKUP_DEST=/home/rode/cbackups
find $BKUP_DEST -name 'backup*.dmp' -mtime +10 -exec rm {} \;



cd /home/rode/cbackups && /u01/app/oracle/product/11.2.0/xe/bin/exp schema/password FILE=backup_`date +'%Y%m%d-%H%M'`.dmp


You will need to change the paths in the script to match your own system. The script starts by exporting the necessary Oracle environment variables, then removes any backups older than 10 days, before using the exp utility to create a new backup.
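The retention and naming logic can be rehearsed without a database. This sandbox creates stand-in dump files, ages one past the 10-day window (touch -d is a GNU coreutils assumption), and runs the same find command the script uses:

```shell
# Sandbox rehearsal of the cleanup and filename logic; no database needed.
BKUP_DEST=$(mktemp -d)
touch "$BKUP_DEST/backup_old.dmp"
touch "$BKUP_DEST/backup_new.dmp"
touch -d '20 days ago' "$BKUP_DEST/backup_old.dmp"   # age it past the window
find "$BKUP_DEST" -name 'backup*.dmp' -mtime +10 -exec rm {} \;
FN="backup_$(date +'%Y%m%d-%H%M').dmp"               # same pattern exp would use
```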

To set it up, follow these instructions: 
  1. Create a directory where the backups will be stored. In my case it is: /home/rode/cbackups
  2. Open vi and save the script (after replacing the paths with your own) in your home directory as, say: /home/rode/backup_script.sh
  3. Next, run crontab -e to set up a new cron job.
  4. Add an entry like:
    10 0 * * * /home/rode/backup_script.sh

The entry above runs the script daily at 00:10 (ten minutes past midnight). Check out this link on how to schedule jobs with crontab: http://www.adminschoice.com/crontab-quick-reference


A better option may be to use Data Pump instead, as it supersedes the exp/imp utilities and copes better when the export involves a lot of data. Check this document on how to use Data Pump:
http://www.oracle.com/technetwork/issue-archive/2009/09-jul/datapump11g2009-quickstart-128718.pdf