Master Your Salesforce Technical Interview: 35+ Salesforce Developer Scenario-Based Interview Questions

Preparing for a technical Salesforce interview can be daunting. It’s not just about knowing definitions; it’s about applying your knowledge to solve real-world problems. Hiring managers at top companies like Salesforce, Infosys, Deloitte, and TCS want to see how you think, strategize, and write clean, scalable code.

That’s why we’ve compiled this list of over 35 scenario-based interview questions. These are practical challenges you’ll face on the job. We’ve broken them down into categories to help you focus your preparation. Let’s dive in!

Apex & Triggers

This section focuses on your core server-side logic, governor limits, and best practices.

1. You need to stop users from editing a field after a record reaches a certain stage. How would you implement it?

This is a classic control question. There are a few ways to achieve this, each with its own pros and cons; the right choice depends on your business requirement.

Best Approach (Declarative): Using Validation Rule

This is the simplest and most efficient method. It runs before a record is saved and can prevent the change entirely.

  • Formula: AND(ISPICKVAL(StageName, "Closed Won"), ISCHANGED(Amount))
  • Why it’s best: It’s maintainable without code, fires on the server side, and is highly performant.

Approach 2 (Programmatic): An Apex Trigger on “before update”

If the logic is too complex for a validation rule (e.g., requires querying other objects), use a trigger.

  • Logic: In the trigger, check the old and new values. If the condition is met, use the .addError() method on the record or field to prevent the save.
  • Example:
trigger OpportunityTrigger on Opportunity (before update) {
    for (Opportunity opp : Trigger.new) {
        // Check if the record in the database is already 'Closed Won'
        if (Trigger.oldMap.get(opp.Id).StageName == 'Closed Won' && opp.Amount != Trigger.oldMap.get(opp.Id).Amount) {
            opp.Amount.addError('Cannot change Amount on a Closed Won Opportunity.');
        }
    }
}

Approach 3: UI-Level Approach

Make the field read-only on the Page Layout based on a record type or profile. This is the weakest approach as it doesn’t prevent updates via API or other automations.

2. A trigger is failing due to recursive updates. How do you handle it?

Recursion happens when a trigger update causes another update, which in turn fires the same trigger again, leading to an infinite loop and hitting governor limits.

Best Practice: Use a Static Boolean Flag

This is the standard industry approach. A static variable’s state is preserved within a single transaction.

// Create a Trigger Handler class
public class OpportunityTriggerHandler {
    private static boolean hasRun = false;

    public static void onAfterUpdate(List<Opportunity> newList) {
        if (hasRun) {
            return; // Exit if the trigger has already run in this transaction
        }
        hasRun = true; // Set the flag to true

        // ... your logic here that might cause a recursive update ...
        List<Opportunity> updates = new List<Opportunity>();
        // DML operation here
        // update updates;
    }
}

// Your Trigger
trigger OpportunityTrigger on Opportunity (after update) {
    OpportunityTriggerHandler.onAfterUpdate(Trigger.new);
}

3. There is a CPU timeout on a complex trigger. How do you refactor?

CPU timeout errors (the 10,000 ms limit for synchronous transactions; 60,000 ms for asynchronous) occur when your code is too computationally expensive. Refactoring focuses on optimization and moving logic to be asynchronous.

  1. Bulkify Your Code: Ensure you are not running SOQL or DML inside for loops. Use Maps to optimize data access and avoid nested loops where possible.
  2. Move Logic to a Handler Class: This is a best practice for maintainability and testing, but it also helps organize your optimization efforts.
  3. Optimize SOQL: Use selective WHERE clauses, query only the fields you need, and avoid querying on formula fields if possible.
  4. Asynchronous Processing: If the logic isn’t required in real-time, move it to an asynchronous method (@future, Queueable Apex). You can check for a condition in the trigger and then launch the async process.
    • Example: Instead of calculating complex roll-ups in the trigger, pass the record IDs to a Queueable class to process them in a separate transaction.
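The hand-off described above can be sketched like this; RollupQueueable and its internals are hypothetical names for illustration:

```apex
// Sketch only: the trigger defers the expensive work to a Queueable.
trigger OpportunityTrigger on Opportunity (after update) {
    // Pass just the IDs; the heavy lifting runs in a separate
    // transaction with its own fresh set of governor limits.
    System.enqueueJob(new RollupQueueable(Trigger.newMap.keySet()));
}

// Hypothetical Queueable that performs the roll-up calculation
public class RollupQueueable implements Queueable {
    private Set<Id> oppIds;
    public RollupQueueable(Set<Id> oppIds) {
        this.oppIds = oppIds;
    }
    public void execute(QueueableContext ctx) {
        // Query related records once, aggregate with Maps,
        // then perform a single DML update here.
    }
}
```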

4. You are asked to refactor a 500+ line trigger. What’s your approach?

A massive trigger is a maintenance nightmare. The key is to implement a Trigger Handler Framework.

  1. Create a Handler Class: All logic moves from the trigger to a dedicated Apex class.
  2. One Trigger Per Object: The trigger itself becomes very simple, essentially a dispatcher that calls methods in the handler based on the trigger context (Trigger.isBefore, Trigger.isUpdate, etc.).
  3. Separate Logic into Methods: Within the handler, break down the logic into private, single-responsibility methods (e.g., handleNewAccounts(), validateAccountUpdates()). This improves readability and testing.
  4. Use a Bypass Mechanism: Implement a way to disable the trigger logic during data migrations (e.g., using a static flag or a Custom Setting/Permission).
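Put together, the trigger itself shrinks to a thin dispatcher. A minimal sketch, assuming a hypothetical AccountTriggerHandler class with the methods named above and a static bypass flag:

```apex
trigger AccountTrigger on Account (before insert, before update) {
    // Bypass mechanism (step 4): skip all logic during data migrations
    if (AccountTriggerHandler.bypass) {
        return;
    }
    // Dispatch to single-responsibility handler methods (steps 2-3)
    if (Trigger.isBefore && Trigger.isInsert) {
        AccountTriggerHandler.handleNewAccounts(Trigger.new);
    } else if (Trigger.isBefore && Trigger.isUpdate) {
        AccountTriggerHandler.validateAccountUpdates(Trigger.new, Trigger.oldMap);
    }
}
```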

5. How do you handle callouts from a trigger?

You cannot make a synchronous callout directly from a trigger, and you cannot make any callout after performing a DML operation in the same transaction, because Salesforce blocks callouts while uncommitted work is pending.

Solution: Asynchronous Apex

The trigger should instantiate and enqueue an asynchronous Apex job (@future or Queueable Apex) and pass the necessary record IDs or data to it. The asynchronous method will then perform the callout.

// In Trigger
trigger AccountTrigger on Account (after insert) {
    Set<Id> accountIds = Trigger.newMap.keySet();
    // Call a future method to handle the callout
    AccountCalloutService.sendAccountInfo(accountIds);
}

// In a separate class
public class AccountCalloutService {
    @future(callout=true)
    public static void sendAccountInfo(Set<Id> accountIds) {
        // Perform SOQL to get necessary data
        // Build HTTP Request
        // Send request and handle response
    }
}

6. What is the difference between insert and Database.insert?

Both are used to insert records, but Database.insert provides more flexibility.

  • insert recordList;: If any record in the list fails, the entire transaction is rolled back, and a DmlException is thrown. It’s all or nothing.
  • Database.insert(recordList, allOrNone);: The allOrNone parameter (a boolean) is key.
    • If true, it behaves exactly like the insert statement.
    • If false, it allows for partial success. If some records fail, the successful ones are still committed to the database. The method returns a list of Database.SaveResult objects that you can iterate over to identify which records failed and why.
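A short sketch of the partial-success pattern:

```apex
List<Account> accounts = new List<Account>{
    new Account(Name = 'Valid Account'),
    new Account() // Missing required Name field - this record will fail
};

// allOrNone = false: commit the good records, report the bad ones
Database.SaveResult[] results = Database.insert(accounts, false);
for (Integer i = 0; i < results.size(); i++) {
    if (!results[i].isSuccess()) {
        for (Database.Error err : results[i].getErrors()) {
            System.debug('Record ' + i + ' failed: ' + err.getMessage());
        }
    }
}
```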

7. When would you use a Custom Metadata Type instead of a Custom Setting?

Both store application configuration data, but Custom Metadata Types are the modern, recommended approach.

Use Custom Metadata Types When:

  • You need to deploy the configuration data (the records themselves) from a Sandbox to Production using Change Sets or other metadata deployment tools.
  • You need to reference the configuration in formulas, validation rules, or flows.
  • You want to build more sophisticated relationships between configuration records.

Use Custom Settings (List) When:

  • You need to frequently change the data in Production without a deployment.
  • The data is not something you would typically manage as part of the application’s metadata.

Tip: For new development, almost always default to Custom Metadata Types.
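Reading custom metadata in Apex is cheap; the getInstance()/getAll() methods don’t consume SOQL query limits. A sketch, assuming a hypothetical Integration_Setting__mdt type with an Endpoint__c field:

```apex
// Fetch one configuration record by its developer name
Integration_Setting__mdt gateway =
    Integration_Setting__mdt.getInstance('Payment_Gateway');
String endpoint = gateway.Endpoint__c;

// Or fetch every record of the type at once
Map<String, Integration_Setting__mdt> allSettings =
    Integration_Setting__mdt.getAll();
```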

Flow & Automation

Flow is the future of Salesforce automation. Expect questions on its limits and capabilities.

8. A Flow is hitting governor limits when processing a large number of records. How would you fix it?

This is a common issue when a Flow is triggered by a mass update.

  1. Bulkify Your Flow: The most important rule is never put SOQL queries or DML operations inside a loop.
    • Instead: Loop through your records to collect IDs or data into a collection variable. Then, perform a single DML/SOQL operation outside the loop on that collection.
  2. Filter Early: Use specific criteria in your Get Records elements to fetch only the data you absolutely need.
  3. Offload to Apex: If the logic is inherently complex and can’t be bulkified within the Flow’s constraints, use an Invocable Apex Action. This allows you to write bulk-safe Apex code that the Flow can call.
  4. Launch a Batch Job: For truly massive, non-real-time processing, the Flow can be designed to simply launch a Batch Apex job.
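An Invocable Apex Action (option 3) might look like the following sketch; FlowContactUpdater is a hypothetical name:

```apex
public with sharing class FlowContactUpdater {
    // Flow passes a collection per invocation; Salesforce bulkifies
    // the call across all flow interviews in the same transaction.
    @InvocableMethod(label='Update Contacts In Bulk')
    public static void updateContacts(List<List<Contact>> inputs) {
        List<Contact> toUpdate = new List<Contact>();
        for (List<Contact> batch : inputs) {
            toUpdate.addAll(batch);
        }
        if (!toUpdate.isEmpty()) {
            update toUpdate; // one DML for the whole bulk run
        }
    }
}
```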

Related reading: How to Debug Salesforce Flow Effectively.

9. You need to run Apex code at midnight every day. What’s your approach?

Best Solution: Schedulable Apex

Implement the Schedulable interface in an Apex class. This provides an execute method that contains the code to be run.

public class DailyMidnightJob implements Schedulable {
    public void execute(SchedulableContext sc) {
        // Your logic here
        // e.g., AccountCleanup.runBatch();
    }
}

You can then schedule this job to run daily using the “Schedule Apex” button in Setup or programmatically using the System.schedule method.
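Scheduling the class above from Anonymous Apex is a one-liner; the cron expression reads seconds, minutes, hours, day-of-month, month, day-of-week:

```apex
// '0 0 0 * * ?' = every day at midnight
System.schedule('Daily Midnight Job', '0 0 0 * * ?', new DailyMidnightJob());
```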

Alternative: Scheduled Flow

You can now create a schedule-triggered flow that runs at a specific time. For simple actions, this is a great no-code alternative. However, for complex logic or large data volumes, Schedulable Apex launching a Batch class is more robust.

10. How would you log errors globally across your Apex classes and Flows?

A robust error logging framework is crucial for production support.

Solution: Platform Events + Custom Object

  1. Create a Custom Object: Build a Log__c custom object with fields like ClassName__c, MethodName__c, ErrorMessage__c, StackTrace__c, Severity__c (Error, Warning, Info), and a Details__c Long Text Area.
  2. Create a Platform Event: Define a Error_Log__e Platform Event with fields matching the custom object.
  3. Create a Logging Utility Class: Write a central LogService.cls with a static method like logError(Exception e, String details). This method populates the fields of the Error_Log__e Platform Event and publishes it using EventBus.publish().
  4. Create a Trigger on the Platform Event: An “after insert” trigger on Error_Log__e subscribes to the events and inserts them as Log__c records.
  5. Use It Everywhere: In your Apex catch blocks and in Flow Fault Paths, call your LogService or publish the platform event.

Why this approach?

Publishing a platform event decouples the act of logging from the act of saving the log: published events are not rolled back when the surrounding transaction fails, so the error log survives even if the original DML does not. (Note that EventBus.publish() does count toward DML governor limits, but the publish itself cannot fail the caller’s transaction.)
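A minimal sketch of the LogService utility from step 3, assuming the Error_Log__e event has fields mirroring the custom object:

```apex
public class LogService {
    public static void logError(Exception e, String details) {
        Error_Log__e evt = new Error_Log__e(
            ErrorMessage__c = e.getMessage(),
            StackTrace__c   = e.getStackTraceString(),
            Severity__c     = 'Error',
            Details__c      = details
        );
        // The published event survives even if the caller's
        // transaction later rolls back.
        EventBus.publish(evt);
    }
}

// Usage in any catch block:
// try { ... } catch (Exception e) { LogService.logError(e, 'AccountService.sync'); }
```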

Lightning Web Components (LWC)

LWC is the standard for UI development. These questions test your front-end skills.

11. Users report a slow Lightning record page. What steps do you take to diagnose and fix it?

  1. Analyze Page Performance: Use the built-in Lightning Page Analysis tool (accessible from the Lightning App Builder). It provides performance predictions and identifies components with high load times.
  2. Check the Component-Level:
    • Reduce Server Trips: Are components making too many separate Apex calls? Consolidate them. Use the @wire service’s built-in caching where possible.
    • Lazy Loading: Don’t load all data at once. Use tabs, accordions, or “show more” buttons to load data only when the user needs it. The lwc:if directive (the successor to the deprecated if:true) is your friend.
    • Component Granularity: Avoid creating one giant, monolithic LWC. Break it down into smaller, reusable components.
  3. Check the Apex Controller: Ensure the server-side Apex methods are optimized. Are the SOQL queries selective? Is the code bulkified?
  4. Use Caching: Use @wire(method, {params}) for read-only data, as Salesforce caches the response. For mutable data, use imperative Apex calls but consider implementing your own client-side caching if appropriate.

12. You need to handle multilingual labels in LWC. How?

Solution: Custom Labels

This is the standard Salesforce platform feature for internationalization.

  1. Create Custom Labels: In Setup, go to “Custom Labels” and create labels for all your text strings (e.g., Button_Save, Header_My_Component). Provide translations for each supported language.
  2. Import in LWC: Import the custom labels into your LWC’s JavaScript file.
  3. Use in HTML: Reference the imported labels in your template.
// myComponent.js
import { LightningElement } from 'lwc';
import saveButtonLabel from '@salesforce/label/c.Button_Save';
import componentHeader from '@salesforce/label/c.Header_My_Component';

export default class MyComponent extends LightningElement {
    // Expose labels to template
    labels = {
        save: saveButtonLabel,
        header: componentHeader
    };
}
```html
<!-- myComponent.html -->
<template>
    <h1>{labels.header}</h1>
    <lightning-button label={labels.save}></lightning-button>
</template>
```

13. You have to provide dynamic picklist values in an LWC. How would you do that?

You can’t directly query for picklist values from a field in LWC. You need an Apex controller.

Solution: Use the UI API or Schema Class in Apex

  1. Create an Apex Method: Write a public, static, @AuraEnabled(cacheable=true) method that uses the Schema class to get the picklist values for a specific field on an object.
  2. Call from LWC: Use the @wire service in your LWC to call this Apex method.
  3. Map for lightning-combobox: The Apex method should return a list of Map<String, String> or a wrapper class that can be easily mapped to the label and value properties required by the lightning-combobox component.
// PicklistController.cls
public with sharing class PicklistController {
    @AuraEnabled(cacheable=true)
    public static List<Map<String, String>> getPicklistValues(String objectName, String fieldName) {
        List<Map<String, String>> options = new List<Map<String, String>>();
        Schema.DescribeFieldResult fieldResult = Schema.getGlobalDescribe().get(objectName)
            .getDescribe().fields.getMap().get(fieldName).getDescribe();

        for (Schema.PicklistEntry ple : fieldResult.getPicklistValues()) {
            options.add(new Map<String, String>{'label' => ple.getLabel(), 'value' => ple.getValue()});
        }
        return options;
    }
}


```javascript
// myComponent.js
import { LightningElement, wire } from 'lwc';
import getPicklistValues from '@salesforce/apex/PicklistController.getPicklistValues';

export default class MyComponent extends LightningElement {
    @wire(getPicklistValues, { objectName: 'Account', fieldName: 'Industry' })
    industryPicklist;
    // The wired property industryPicklist.data will contain the options.
}
```

14. How do you communicate between LWCs?

This depends on the relationship between the components.

  • Parent to Child: Pass data down via public properties (@api). When the property value changes in the parent, the child component re-renders.
  • Child to Parent: The child dispatches a Custom Event. The parent component listens for this event in its HTML markup and handles it in a JavaScript method.
    • Example (Child): this.dispatchEvent(new CustomEvent('notify', { detail: 'Some data' }));
    • Example (Parent HTML): <c-child-component onnotify={handleNotification}></c-child-component>
  • Unrelated Components: Use the Lightning Message Service (LMS). This is a pub-sub (publish/subscribe) model. One component publishes a message to a channel, and any other component (LWC, Aura, Visualforce) subscribed to that channel will receive the message.

Integration & Platform

Integration is a critical skill for any senior developer.

15. You have to integrate Salesforce with an external system. What approach would you take?

This is a high-level design question. Your answer should be a structured thought process.

  1. Clarify Requirements:
    • Direction: Is it Inbound (external system to Salesforce) or Outbound (Salesforce to external system)? Or bidirectional?
    • Timing: Is it real-time (synchronous) or asynchronous?
    • Volume: How many records per day/hour?
    • Security: How will authentication be handled?
  2. Choose the Right Tool/Pattern:
    • Outbound, Real-Time: Apex Callouts (REST or SOAP). Use Named Credentials to manage authentication securely.
    • Outbound, Asynchronous, Fire-and-Forget: Platform Events. Salesforce publishes an event, and a middleware tool (like MuleSoft) or the external system’s listener picks it up.
    • Outbound, Asynchronous, High Volume: Use Batch Apex to make callouts. Remember the callout limits per transaction.
    • Inbound, Real-Time: Create a custom Apex REST or SOAP service.
    • Inbound, Asynchronous: A middleware solution is often best to handle transformations and retries before calling the Salesforce APIs (REST/SOAP/Bulk).
  3. Consider a Middleware Solution (like MuleSoft): For complex, multi-system integrations, a dedicated integration platform is almost always the right answer. It handles orchestration, transformation, error handling, and retries far better than point-to-point code.

16. You need real-time sync between Salesforce and a payment gateway. What’s your solution?

“Real-time” points to synchronous or near-real-time patterns.

Solution: Apex Callout + Callback URL

  1. Payment Initiation (Outbound): When a user clicks “Pay” on an Opportunity, a Lightning component calls an Apex method. This method makes a synchronous REST callout to the payment gateway’s API to initiate the transaction. Use a Named Credential for security.
  2. Payment Confirmation (Inbound): The payment gateway needs to notify Salesforce when the payment is complete. Do not keep the initial connection open. Instead, provide the gateway with a callback URL. This URL should point to a public Apex REST Service you’ve created.
  3. Processing the Callback: When the gateway calls your Apex REST service with the payment status, the service updates the Opportunity record to “Paid” and performs any other necessary logic.
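The callback endpoint from step 3 could be sketched like this; the URL mapping, payload shape, and field values are assumptions for illustration:

```apex
@RestResource(urlMapping='/payment/callback/*')
global with sharing class PaymentCallbackService {
    @HttpPost
    global static void handleCallback() {
        // Assumed payload: { "opportunityId": "...", "status": "PAID" }
        Map<String, Object> payload = (Map<String, Object>)
            JSON.deserializeUntyped(RestContext.request.requestBody.toString());

        if ((String) payload.get('status') == 'PAID') {
            Id oppId = Id.valueOf((String) payload.get('opportunityId'));
            update new Opportunity(Id = oppId, StageName = 'Closed Won');
        }
    }
}
```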

17. How do you ensure secure callouts in a multi-org or multi-environment scenario?

Solution: Named Credentials

Hardcoding URLs and credentials in Apex code or Custom Settings is insecure and inflexible. Named Credentials solve this.

  • What they do: They abstract the endpoint URL and the authentication details (e.g., username/password, OAuth token) away from the code.
  • How they work: In your Apex code, you reference the Named Credential’s name. In each environment (Dev, UAT, Prod), you configure that Named Credential to point to the correct endpoint (e.g., payment_gateway_sandbox vs. payment_gateway_prod) and use the appropriate credentials for that environment.
  • Benefit: The code remains identical across all environments. You only change the configuration, not the code, when deploying.
// No hardcoded URL!
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:My_Named_Credential/api/v1/payment'); // The magic is here
req.setMethod('POST');
// ...

For more details, see Salesforce Help: Named Credentials.

Data, Performance & Security

These questions cover how you handle data efficiently and securely.

18. Your SOQL query is returning too many rows (more than 50,000). What’s your fix?

This hits the ‘Too many query rows’ governor limit.

  • For Synchronous Context: You can’t raise the limit. You must refactor your logic to be more selective.
    • Add more filters to the WHERE clause.
    • Break down the problem. Can you process records in smaller, more targeted chunks?
  • For Asynchronous Context (Best Solution): Batch Apex. Batch Apex is designed specifically for this. It processes records in chunks (default 200, max 2,000) and each chunk gets its own set of governor limits. The start method’s Database.QueryLocator can fetch up to 50 million records.
public class ProcessLargeAccountSet implements Database.Batchable<sObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id, Name FROM Account WHERE Is_Active__c = true');
    }
    // ... execute and finish methods ...
}

19. How would you enforce field-level security (FLS) dynamically in Apex?

By default, Apex runs in system mode, which ignores FLS and object permissions. This can be a security risk.

Modern Approach (Best): WITH SECURITY_ENFORCED

This is a SOQL clause that enforces FLS and object-level permissions directly in the query. If a user doesn’t have access to a field or object, the query will throw an exception. (On recent API versions, WITH USER_MODE is the newer alternative and covers a broader set of permission checks.)

// Throws a QueryException if the user can't see Name or Industry
List<Account> accts = [SELECT Name, Industry FROM Account WITH SECURITY_ENFORCED];

Post-Query Check: stripInaccessible()

This method takes a list of sObjects and removes fields that the current user cannot access (based on FLS). It’s useful if you need to query the data first for system logic but want to ensure you don’t expose it on a UI.

List<Account> accts = [SELECT Name, AnnualRevenue FROM Account];
// Remove fields the user can't see before sending to a component
SObjectAccessDecision decision = Security.stripInaccessible(AccessType.READABLE, accts);
return decision.getRecords();

Manual Checks (Legacy)

Using Schema.DescribeFieldResult methods like isAccessible(), isCreateable(), isUpdateable(). This is verbose and should be avoided in favor of the newer methods.

20. You need to share a record with a specific team (Public Group) programmatically. What code would you write?

This involves creating a manual share record. Every object with a sharing model of “Private” or “Public Read Only” has a corresponding Share object (e.g., AccountShare, MyCustomObject__Share).

public static void shareRecordWithGroup(Id recordId, Id groupId) {
    // 1. Create a new Share record
    MyCustomObject__Share myShare = new MyCustomObject__Share();

    // 2. Set the ID of the record to share
    myShare.ParentId = recordId;

    // 3. Set the ID of the User or Group to share with
    myShare.UserOrGroupId = groupId;

    // 4. Set the access level (Read or Edit)
    myShare.AccessLevel = 'Edit';

    // 5. Set the reason for sharing (optional but good practice)
    myShare.RowCause = Schema.MyCustomObject__Share.RowCause.Manual;

    // 6. Insert the share record
    // Use a database method to handle potential errors gracefully
    Database.SaveResult sr = Database.insert(myShare, false);

    if (!sr.isSuccess()) {
        // Handle the error, e.g., user already has access
        System.debug('Error sharing record: ' + sr.getErrors()[0].getMessage());
    }
}

21. A client wants login auditing by IP address and browser. How do you build it?

Salesforce already provides this out-of-the-box. The key is knowing where to find it and how to use it.

Solution: Login History + Login Forensics

  1. Login History: This is a standard related list on the User record and a setup audit tool. It shows the last 20,000 logins for your org over the past 6 months. It includes IP, Browser, Application, and Status. You can build reports and dashboards on the LoginHistory object.
  2. Login Forensics: This is an advanced feature (requires the Event Monitoring add-on) that helps identify suspicious login activity. You can define policies to trigger actions (like requiring multi-factor authentication) when certain criteria are met.
  3. Custom Solution: If you need to store this data for longer than 6 months or trigger complex custom automation, you can use a Login Flow. A Login Flow launches every time a user logs in. You can use it to query the user’s IP and browser details and save them to a custom Login_Audit__c object.

Deployment & Best Practices

Showing you know how to work on a team and manage the application lifecycle is just as important as writing code.

22. How would you migrate metadata from a Dev Sandbox to Production safely?

A safe deployment minimizes risk and downtime.

  1. Version Control: All metadata changes should be committed to a Git repository first. This is the source of truth.
  2. Continuous Integration (CI): Use a CI tool like Jenkins, Gearset, or Copado. When changes are merged into a release branch in Git, the CI tool automatically attempts to validate the changes against a Staging/UAT sandbox. This catches errors early.
  3. Full Sandbox Testing (UAT): Deploy the final change set to a Full Sandbox, which is a recent copy of Production. Have business users perform User Acceptance Testing (UAT) here. This ensures the changes work with real data and processes.
  4. Validate in Production: Before the final deployment, perform a validation-only deployment to Production during off-peak hours. This runs all Apex tests and checks for component errors without actually committing any changes. This is the single most important step for a safe deployment.
  5. Schedule Deployment: Schedule the final deployment during a planned maintenance window. Use a CI/CD tool or Change Sets.
  6. Post-Deployment: Perform smoke tests in Production to verify key functionality is working as expected. Have a rollback plan ready (which is easier if you use version control).

23. What are your steps to prepare a project for a deployment review?

  1. Code Coverage: Ensure all new Apex code has at least 75% test coverage (aim for 90%+) and that all tests are passing.
  2. Code Quality & Best Practices: Run a static code analysis tool (like PMD) to check for issues like missing FLS checks, SOQL in loops, and hardcoded IDs.
  3. Documentation: Create a deployment plan document. It should include:
    • A list of all components being deployed (Apex classes, LWCs, fields, etc.).
    • Any pre-deployment and post-deployment manual steps (e.g., enabling a setting, running a data script).
    • The results of the Production validation.
    • A rollback plan.
  4. Peer Review: Have another developer review your code and deployment plan. They can often spot issues you’ve missed.

More Scenario-Based Questions for Developers

24. When should you use Future vs. Queueable vs. Batch Apex?

This question tests your understanding of asynchronous Apex and when to use the right tool for the job.

| Feature | @future | Queueable | Batch |
| --- | --- | --- | --- |
| Primary Use | Simple, fire-and-forget tasks; callouts. | Chaining jobs; complex data types; monitoring. | Processing very large data sets (thousands to millions). |
| Parameters | Primitives, arrays of primitives. | sObjects, complex Apex types. | N/A (uses a QueryLocator). |
| Job Chaining | No. | Yes (System.enqueueJob). | No (but can call a Queueable from the finish method). |
| Monitoring | No Job ID returned. | Yes, a Job ID is returned. | Yes, via AsyncApexJob. |
| Real-World Use | Callout to a weather API on new Account. | Complex onboarding: create User, then Contact, then assign Permission Set. | Nightly de-duplication or data archiving for all Contacts. |
  • Use @future when: You have a simple, isolated transaction you need to run in the background, like a callout from a trigger that doesn’t need to be monitored closely.
  • Use Queueable when: You need more control. It’s the modern successor to @future. Use it when you need to pass sObjects, chain jobs together (e.g., “do this, then do that”), or get a Job ID to monitor its status.
  • Use Batch when: Your primary concern is data volume. It’s the only tool designed to safely process millions of records without hitting governor limits by breaking the work into small, manageable chunks.
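The chaining capability that sets Queueable apart can be sketched as follows; both job classes are hypothetical:

```apex
public class CreateContactJob implements Queueable {
    private Id userId;
    public CreateContactJob(Id userId) {
        this.userId = userId;
    }
    public void execute(QueueableContext ctx) {
        // ... create the Contact for the new User here ...

        // Chain the next step - something @future cannot do.
        // Assumes a companion AssignPermissionsJob Queueable exists.
        System.enqueueJob(new AssignPermissionsJob(userId));
    }
}
```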

25. How do you write a test class for a callout?

You can’t make real HTTP callouts from a test context. You must “mock” the callout, telling Apex what fake response to return.

Solution: Implement the HttpCalloutMock interface.

  1. Create a Mock Class: This class will contain the fake response data and status code.
  2. Set the Mock: In your test method, before you call the Apex method that performs the callout, use Test.setMock() to tell Salesforce to use your mock class instead of making a real HTTP request.
  3. Assert the Results: Verify that your code correctly handled the fake response.

Real-World Example: Testing a callout that gets shipping rates.

// The class that makes the callout
public class ShippingRateService {
    public static Decimal getRate() {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://api.shipping.com/rates');
        req.setMethod('GET');
        HttpResponse res = new Http().send(req);
        Map<String, Object> results = (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
        return (Decimal) results.get('rate');
    }
}

// The test class with the mock
@isTest
private class ShippingRateServiceTest {
    @isTest
    static void testGetRateSuccess() {
        // 1. Create the mock
        Test.setMock(HttpCalloutMock.class, new ShippingRateMock());

        // 2. Call the method and assert
        Test.startTest();
        Decimal rate = ShippingRateService.getRate();
        Test.stopTest();

        System.assertEquals(12.50, rate, 'The shipping rate should match the mock response.');
    }
}

// The mock implementation
public class ShippingRateMock implements HttpCalloutMock {
    public HttpResponse respond(HttpRequest req) {
        HttpResponse res = new HttpResponse();
        res.setHeader('Content-Type', 'application/json');
        res.setBody('{"rate": 12.50}');
        res.setStatusCode(200);
        return res;
    }
}

26. What is the Apex transaction ‘Order of Execution’? Why is it important?

The Order of Execution is the specific, non-negotiable sequence of events Salesforce performs when a record is saved. Knowing this is critical for debugging.

Simplified Order:

  1. Loads original record.
  2. before Triggers fire.
  3. System and custom Validation Rules run.
  4. Record is saved to the database (but not yet committed).
  5. after Triggers fire.
  6. Assignment Rules, Auto-Response Rules, Workflow Rules, and some Flows run.
  7. Roll-up Summary Fields are calculated.
  8. Transaction is committed to the database.

Why It’s Important (Real-World Scenario):

A user reports that when they set an Opportunity Stage to “Negotiation,” a custom “Discount Percentage” field they entered gets wiped out.

  • Your thought process: “The user enters the value, so it’s present before saving. Something is overwriting it. What runs after the user input but before the final commit?”
  • Debugging: Knowing the order of execution, you would check for a Workflow Rule or a Process Builder/Flow that might be firing on the “Negotiation” stage and incorrectly setting the discount field to null or zero. You know to look after the before triggers.

Further Reading: Salesforce Help: Triggers and Order of Execution

27. A user can’t see a new LWC on a record page, but you can. What’s the problem?

This is almost always a permissions issue. As an admin/developer, you have wide-open access, but users don’t.

Debugging Checklist:

  1. Apex Class Access: The most common culprit. The user’s Profile or a Permission Set they are assigned must have access to the LWC’s Apex controller class.
  2. Field-Level Security (FLS): Does the Apex controller query fields the user can’t see? Even if the class is accessible, FLS will prevent the fields from being returned, which could cause the component to fail or display no data.
  3. Object Permissions: Does the user have at least “Read” access to the object(s) being queried in the component?
  4. Component Visibility Filter: In the Lightning App Builder, check if the component has a visibility filter applied (e.g., “Show only when Stage = ‘Prospecting'”) that is hiding it for that user’s record.

28. How would you prevent a DML operation inside a loop?

This is a cardinal sin in Apex because it quickly hits the governor limit of 150 DML statements per transaction.

The Pattern: Collect and Process.

  1. Initialize a new List outside your loop.
  2. Inside the loop, perform your logic and add the record to be inserted/updated to the list.
  3. After the loop has finished, perform a single DML operation on the list.

Real-World Example: Updating all child Contacts when a parent Account’s address changes.

// BAD: DML inside loop
trigger AccountTrigger on Account (after update) {
    for (Account acc : Trigger.new) {
        if (acc.BillingStreet != Trigger.oldMap.get(acc.Id).BillingStreet) {
            List<Contact> childContacts = [SELECT Id FROM Contact WHERE AccountId = :acc.Id];
            for (Contact con : childContacts) {
                con.MailingStreet = acc.BillingStreet;
                update con; // DANGER! Will fail with more than 150 contacts.
            }
        }
    }
}

// GOOD: Collect the changed accounts, then query and update in bulk
trigger AccountTrigger on Account (after update) {
    Set<Id> changedAccountIds = new Set<Id>();
    for (Account acc : Trigger.new) {
        if (acc.BillingStreet != Trigger.oldMap.get(acc.Id).BillingStreet) {
            changedAccountIds.add(acc.Id);
        }
    }
    if (changedAccountIds.isEmpty()) {
        return;
    }
    List<Contact> contactsToUpdate = new List<Contact>();
    // One query for all affected accounts, with no SOQL inside the loop
    for (Contact con : [SELECT Id, MailingStreet, AccountId FROM Contact WHERE AccountId IN :changedAccountIds]) {
        con.MailingStreet = Trigger.newMap.get(con.AccountId).BillingStreet;
        contactsToUpdate.add(con);
    }
    if (!contactsToUpdate.isEmpty()) {
        update contactsToUpdate; // SAFE! One DML statement for all records.
    }
}

29. You need to display an error message from an Apex catch block in an LWC. How?

You need to catch the error in JavaScript and use the ShowToastEvent to display it to the user.

The Pattern:

  1. Wrap your imperative Apex call in a try/catch block in your LWC’s JavaScript.
  2. Import ShowToastEvent from lightning/platformShowToastEvent.
  3. In the catch block, create and dispatch a new toast event, pulling the message from the error object.

Real-World Example:

// myLwc.js
import { LightningElement, api } from 'lwc';
import { ShowToastEvent } from 'lightning/platformShowToastEvent';
import doComplexServerAction from '@salesforce/apex/MyController.doComplexServerAction';

export default class MyLwc extends LightningElement {
    @api recordId; // provided automatically on a record page

    async handleSaveClick() {
        try {
            await doComplexServerAction({ recordId: this.recordId });
            // Handle success
            this.dispatchEvent(new ShowToastEvent({
                title: 'Success',
                message: 'Record saved successfully!',
                variant: 'success'
            }));
        } catch (error) {
            // Handle error
            this.dispatchEvent(new ShowToastEvent({
                title: 'Error Saving Record',
                // Dig into the error object to find the user-friendly message
                message: error.body ? error.body.message : error.message,
                variant: 'error',
                mode: 'sticky' // Keep the error on screen until dismissed
            }));
        }
    }
}

30. Explain the purpose of Test.startTest() and Test.stopTest().

These methods are essential for testing asynchronous code and managing governor limits within a test context.

What they do:

  • Test.startTest(): Marks a point in your test code to get a fresh, new set of governor limits.
  • Test.stopTest(): Marks the end of the “fresh limit” section. Crucially, it also forces any asynchronous processes (like @future or Queueable jobs) that were started after Test.startTest() to execute immediately and synchronously.

Real-World Use Case:

You must use this pair to test an @future method. Without Test.stopTest(), the test would finish before the future method even runs, making it impossible to assert its result.

@isTest
static void testFutureCallout() {
    // Setup data before starting the test
    Account acc = new Account(Name='Test Account');
    insert acc;

    Test.startTest();
        // Call the method that contains the @future callout
        AccountService.sendAccountToExternalSystem(acc.Id);
    Test.stopTest(); // This forces the future method to run NOW.

    // Now you can assert the results of the future method
    Account updatedAcc = [SELECT Name, Synced__c FROM Account WHERE Id = :acc.Id];
    System.assertEquals(true, updatedAcc.Synced__c, 'Account should be marked as synced.');
}

31. What are the pros and cons of using an LWC versus a Screen Flow for a user input screen?

This tests your ability to choose the right tool for the job.

Screen Flow:

  • Pros: Incredibly fast to build and modify for business logic changes. Great for multi-step wizards. Managed by admins/analysts. Built-in state management and navigation.
  • Cons: Limited UI/UX customization. Can feel less responsive than an LWC for complex, real-time validation. Harder to integrate with external JavaScript libraries.
  • Best Use Case: A guided, multi-page process for a service agent to capture customer issue details, where the questions might change frequently based on business needs.

Lightning Web Component (LWC):

  • Pros: Complete control over the look, feel, and responsiveness. Can build highly interactive, single-page application experiences. Superior performance. Easy to use third-party libraries.
  • Cons: Requires significant developer effort (JS, HTML, CSS, Apex). Slower to build and iterate on than a Flow.
  • Best Use Case: A complex product configuration tool where selections in one area instantly update options and pricing in another area, requiring heavy client-side logic.

32. How can you debug a failing Batch Apex job?

Debugging batch jobs can be tricky because they run in the background.

  1. Check the Apex Jobs Log: Go to Setup -> Apex Jobs. Find your job and click on it. Any unhandled exceptions or system-level errors will be shown here.
  2. Use System.debug(): Add debug statements to your start, execute, and finish methods. You can view these logs from the Apex Jobs page. This is good for checking variable values in a single chunk.
  3. Use Database.Stateful: If you need to track a value across multiple chunks (e.g., counting the total number of records processed), implement Database.Stateful in your class. This preserves the state of member variables.
  4. Create a Custom Log Object: This is the most robust method. Create a Log__c object. In your execute method’s catch block, insert a new Log__c record with the error message, stack trace, and the IDs of the records in the failing chunk. This gives you a permanent, queryable record of all failures.
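Steps 3 and 4 above might be sketched like this; Log__c and its fields are assumed custom objects/fields in your org, and the business logic is a placeholder:

```apex
public class ContactCleanupBatch implements Database.Batchable<SObject>, Database.Stateful {
    // Database.Stateful preserves this counter across chunks
    private Integer processedCount = 0;

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id, Email FROM Contact WHERE Email = null');
    }

    public void execute(Database.BatchableContext bc, List<Contact> scope) {
        try {
            // ... business logic for this chunk goes here ...
            processedCount += scope.size();
        } catch (Exception e) {
            // Permanent, queryable record of the failure and its chunk
            List<String> ids = new List<String>();
            for (Contact con : scope) {
                ids.add(con.Id);
            }
            insert new Log__c(
                Message__c = e.getMessage(),
                Stack_Trace__c = e.getStackTraceString(),
                Record_Ids__c = String.join(ids, ',')
            );
        }
    }

    public void finish(Database.BatchableContext bc) {
        System.debug('Total records processed: ' + processedCount);
    }
}
```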

33. What is a change set and what are its limitations?

A Change Set is a native Salesforce tool for moving metadata (components like objects, fields, code) from one related org to another (e.g., from a Dev Sandbox to a UAT Sandbox).

How it works:

You “upload” a change set from the source org and then “deploy” it in the target org.


Key Limitations:

  • No Deletion: You cannot use a change set to delete components. This must be done manually in the target org, which is error-prone.
  • No Version Control: There’s no history, no branching, no merging. The org is the source of truth, not a Git repository.
  • Manual Process: The upload/deploy process is manual and can be slow. It’s difficult to automate.
  • Component Support: Not all metadata types are supported.

Real-World Impact: For a small team with simple changes, change sets are fine. For a large enterprise team, the lack of version control and automation makes them inefficient and risky. This is why teams adopt modern DevOps tools like Salesforce DX (SFDX), Gearset, or Copado.

34. You need to update 5 million records in Production. How do you do it?

This is a large data volume problem. The wrong approach can lock tables, hit limits, and cause major system issues.

The Right Tool: Batch Apex

This is the definitive answer. Batch Apex is designed from the ground up to process millions of records by breaking them into small, manageable chunks, each with its own governor limits.

The Wrong Tool: Data Loader

While Data Loader uses the Bulk API, running an update on 5 million records directly is risky. You have less control over chunk size, error handling, and retries than you do with a custom Batch Apex class. A single network issue could disrupt the entire job.

The Process:

  1. Write a Batch Apex class where the start method’s query locator selects the 5 million records (SELECT Id FROM Contact WHERE ...).
  2. The execute method contains the logic to update each record in the chunk.
  3. Thoroughly test the batch class in a Full Sandbox with a representative amount of data.
  4. Schedule the job to run during off-peak hours in Production.
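A skeleton of such a batch class might look like the sketch below; the object, field, and filter are placeholders for your actual update logic:

```apex
public class ContactRegionUpdateBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        // A QueryLocator can return up to 50 million records
        return Database.getQueryLocator(
            'SELECT Id, Region__c FROM Contact WHERE Region__c = null'
        );
    }

    public void execute(Database.BatchableContext bc, List<Contact> scope) {
        for (Contact con : scope) {
            con.Region__c = 'EMEA'; // placeholder update logic
        }
        // allOrNone = false, so one bad record doesn't fail the whole chunk
        Database.update(scope, false);
    }

    public void finish(Database.BatchableContext bc) {
        // Send a notification or chain a follow-up job here
    }
}
```

You would kick it off with a deliberate chunk size, e.g. `Database.executeBatch(new ContactRegionUpdateBatch(), 200);`, tuning the size down if you see row-lock contention.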

35. Describe the purpose of an Apex Trigger Handler Framework.

A Trigger Handler Framework is a design pattern that separates trigger logic from the trigger file itself, leading to a much cleaner, more scalable codebase.

The Problem it Solves:

Putting all your logic directly in the .trigger file leads to a single, massive file that is impossible to read, test, and maintain. It also makes it hard to control the order of operations.

The Solution:

  1. The Trigger: The trigger file becomes a simple “dispatcher.” Its only job is to look at the context (e.g., Trigger.isBefore, Trigger.isUpdate) and call the appropriate method in the handler class.
  2. The Handler Class: A separate Apex class that contains all the actual business logic, broken down into well-named methods.
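A minimal sketch of the pattern (object and method names are illustrative):

```apex
// The trigger: a thin dispatcher with no business logic of its own
trigger AccountTrigger on Account (before insert, before update, after update) {
    if (Trigger.isBefore && Trigger.isInsert) {
        AccountTriggerHandler.onBeforeInsert(Trigger.new);
    } else if (Trigger.isBefore && Trigger.isUpdate) {
        AccountTriggerHandler.onBeforeUpdate(Trigger.new, Trigger.oldMap);
    } else if (Trigger.isAfter && Trigger.isUpdate) {
        AccountTriggerHandler.onAfterUpdate(Trigger.new, Trigger.oldMap);
    }
}
```

```apex
// The handler: all business logic lives here, in well-named methods
public class AccountTriggerHandler {
    // Static flag so data loads (or tests) can bypass trigger logic
    public static Boolean bypass = false;

    public static void onBeforeInsert(List<Account> newAccounts) {
        if (bypass) return;
        // e.g., default field values
    }

    public static void onBeforeUpdate(List<Account> newAccounts, Map<Id, Account> oldMap) {
        if (bypass) return;
        // e.g., validation that requires querying other objects
    }

    public static void onAfterUpdate(List<Account> newAccounts, Map<Id, Account> oldMap) {
        if (bypass) return;
        // e.g., sync related records
    }
}
```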

Key Benefits:

  • Readability & Maintainability: Logic is organized and easy to find.
  • Reusability: Handler methods can be called from other places if needed.
  • Testability: It’s easier to write focused unit tests for individual handler methods.
  • Bypass Logic: You can easily add a static boolean to the handler to disable all trigger logic during data loads.
  • Recursion Control: Centralizes the static flag to prevent recursion.

Summary

Success in a Salesforce technical interview comes down to two things: a solid grasp of platform fundamentals and the ability to articulate solutions to complex, real-world scenarios. Don’t just memorize answers. Practice explaining the why behind your chosen solution and be prepared to discuss alternatives.

Good luck, and drop a comment if you’d like more articles like this!


Check out our latest post too.

Author

  • Salesforce Hours

    Salesforcehour is a platform built on a simple idea: “The best way to grow is to learn together.” We invite seasoned professionals from across the globe to share their hard-won expertise, giving you the in-depth tutorials and practical insights you need to accelerate your journey. Our mission is to empower you to solve complex challenges and become an invaluable member of the Ohana.

